
Hmm. So have you ever argued in a job interview? Like really standing your ground? In a technical interview?

Today I had a live coding session with a company I'm interested in. The developer kept giving me tasks to evolve the feature further and further.

Everything was TDD. Splendid!

However, at one point I had to test whether the outcome of the method call was random. What I did was basically:
```
Provider<String> provider = new SomeProvider("aaa", "bbb", "ccc", "ddd", "eee", "fff");
Map<String, Integer> counts = new HashMap<>();

// Count how often each value comes back over 100 calls.
for (int i = 0; i < 100; i++) {
    String str = provider.get();
    counts.merge(str, 1, Integer::sum);
}

// The counts should not all be identical: drop everything equal to the
// first distinct count and make sure something is left over.
Set<Integer> occurrences = new HashSet<>(counts.values());
Integer first = occurrences.iterator().next();
occurrences.removeIf(o -> o.equals(first));
assertFalse(occurrences.isEmpty());
```
and I called it good enough, since I cannot verify true randomness.
But the dev argued that this is not enough: I must verify whether the output is truly random or not, and that the occurrences (considering the provider only has a finite set of values to return) are almost equal (i.e. the deviation from the median is the median itself).

I argued this is not possible and that it defeats the core principle of randomness -- non-determinism. If you can reliably test whether a sequence is truly random, you must have an algorithm that determines what value can or cannot come next in the sequence. Which means determinism. And that the (P)RNG is then flawed. The best you can do is test whether the randomness is "good enough" for your use case.

We were arguing and he eventually said "alright, let's call it a good enough solution, since we're short on time".

I wonder whether this will have an adverse effect on my evaluation. So have you ever argued with your interviewer? Did it turn out for the better or for the worse?

But more importantly, was I right? :D

Comments
  • 3
    What was his solution then?
  • 2
    @alexbrooklyn He didn't suggest one to me
  • 8
    No. If you toss a coin a million times, heads should be around 500k. If it's way off, you know your coin is likely not really random. Even still, if you happened to have tossed 999k heads with a truly balanced coin, the next toss will still be 50%.
  • 2
    It's ugly because no matter what you do your test will always have a fail rate, but that's just how statistics works.
  • 8
    Well, the dev was right: there are test suites that check for various statistical properties that have to be satisfied. Uniformity is only one point out of many, and checking only 100 rolls is far too few anyway.

    Thinking of test suites like Diehard(er).
  • 0
    Are the random factors at least saved?
  • 0
    @Lor-inc Around 50% - yes, I agree. But can you define that "around"? :) And I find it very unlikely to get an exactly equal distribution.

    That is what I meant. The problem with probabilities is that you can never be certain. And if you write a unit test that relies on probabilities, you will have randomly ( :) ) failing CI pipelines.

    IMO an option would be to define an SLA for the deviation from 50% that is "good enough", and anything breaching that level would make the test fail (ruling the RNG/PRNG degraded and not suitable). Please do correct me if you feel I'm wrong.

    But I doubt this is a job for a 20 minutes coding session :)
  • 2
    @netikras Of course you can define "around". The easiest test with N coin tosses is that the expected value is N/2, and a 2-sigma confidence interval (95%) means a +/-sqrt(N) band around N/2. Totally doable in a 20-minute test (sketched in code at the end of this thread).
  • 1
    @netikras In the case of the coin, the number of heads in a million tosses will give you something close to a [normal distribution](https://en.wikipedia.org/wiki/...). You are free to choose how "strict" you want your tests to be. The stricter the test, the more false positives and the fewer false negatives. Ideally you'd probably want the tolerance to equal the [deviation](https://en.wikipedia.org/wiki/...) of your [distribution](https://en.wikipedia.org/wiki/...), as this gives the best ratio of false positives to actual faulty RNGs.
  • 0
    Our industry is built around the concept of determinism. We are grossly unprepared even for basic cases of randomness, yet quantum computing is right around the corner.
  • 2
    @Lor-inc I guess anything like throwing in questions about the required confidence interval and false positives vs. false negatives would already have passed that question.

    The answer from @netikras just failed. The dev only kinda agreed to bring it to an end because it was obvious that the applicant didn't have even basic knowledge of statistics - and because it was probably not part of the job requirements anyway.
  • 0
    @Fast-Nop alright, what would that "almost" be for 100 items in the data set then?
  • 0
    @Fast-Nop I guess you're right :)
  • 1
    @netikras SQRT(100)=10, N/2=50, so anything from 40 to 60 for a binary coin toss would count as random with 95% confidence. Pretty loose, that is.

    Note that in order to have any significance, N must be big in comparison to SQRT(N). An order of magnitude is the absolute minimum, so some degree of significance barely starts at 100, but with a pretty big interval (20%).

    Same calculation for N=10000 would be anything from 4900 to 5100, i.e. only 2% relative to N.
  • 1
    @netikras The easy take-away for coin tosses is +/-SQRT(N) as a rule of thumb.
  • 2
    I think I see how to test for randomness. Why check for randomness? Is this for some kind of business logic, to determine if something is based upon market forces versus nature?
  • 1
    @Demolishun If you run simulations that involve noise, and the RNG is crap, then this can fuck up your simulations to the point where the results are more or less just RNG artifacts.

    Even worse for crypto stuff. A bad RNG can make shit so easy to crack that someone like Bruce Schneier could do it between two cups of coffee for breakfast.
  • 1
    @Demolishun It was TDD and the feature had to have 2 versions: 1 -- sequential and 2 -- random :) So I had to somehow show the interviewer with test asserts that both versions were working as expected.
  • 0
    📌
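
For reference, here is roughly what the "good enough" checks discussed in the thread could look like as JUnit tests. This is only a sketch: Provider and SomeProvider are the hypothetical types from the post (assumed to return one of their constructor arguments uniformly at random), JUnit 5 is assumed, and since the tolerances are just 2- and 3-sigma bands, the tests will still fail once in a while, exactly as pointed out above.
```
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.junit.jupiter.api.Test;

class GoodEnoughRandomnessTest {

    @Test
    void coinTossHeadsStayWithinTwoSigma() {
        // Assumption: SomeProvider picks one of its constructor arguments
        // uniformly at random, as described in the post.
        Provider<String> coin = new SomeProvider("heads", "tails");
        int n = 10_000;

        long heads = 0;
        for (int i = 0; i < n; i++) {
            if ("heads".equals(coin.get())) {
                heads++;
            }
        }

        // For a fair coin the heads count has mean n/2 and standard deviation
        // sqrt(n)/2, so n/2 +/- sqrt(n) is roughly a 2-sigma (95%) band.
        double tolerance = Math.sqrt(n);
        assertTrue(Math.abs(heads - n / 2.0) <= tolerance,
                "heads=" + heads + " not within " + (n / 2.0) + " +/- " + tolerance);
    }

    @Test
    void sixValueProviderIsRoughlyUniform() {
        List<String> values = List.of("aaa", "bbb", "ccc", "ddd", "eee", "fff");
        Provider<String> provider =
                new SomeProvider("aaa", "bbb", "ccc", "ddd", "eee", "fff");
        int n = 60_000;

        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i < n; i++) {
            counts.merge(provider.get(), 1, Integer::sum);
        }

        // Each value's count is Binomial(n, 1/6): mean n/6, standard deviation
        // sqrt(n * 1/6 * 5/6). A 3-sigma band keeps false alarms rare while
        // still catching a provider that is far from uniform.
        double expected = n / 6.0;
        double sigma = Math.sqrt(n * (1.0 / 6.0) * (5.0 / 6.0));
        for (String value : values) {
            int count = counts.getOrDefault(value, 0);
            assertTrue(Math.abs(count - expected) <= 3 * sigma,
                    value + " occurred " + count + " times, expected about " + expected);
        }
    }
}
```
The first test is the coin-toss version with the N/2 +/- sqrt(N) band from the thread; the second applies the same idea to the six-value provider from the post, where each value's count is roughly Binomial(N, 1/6).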