In the HN comment thread the article discusses [0], is the conclusion that commenter a1369209993 is correct (there are as many floats between 0 and 1 as between 1 and +INF) and that llm_trw is not? I got a bit confused.
Also, the article links to a blog post by Daniel Lemire [1] in which he says (with regard to producing an unbiased random float) that the claim "picking an integer in [0,2^32) at random and dividing it by 2^32, was equivalent to picking a number at random in [0,1)" is incorrect, and that the resulting distribution is skewed by a ratio of up to 257:1. Not wanting to disagree with Daniel Lemire, but I can't see why, and a quick experiment in Python didn't show this ratio.
The blog post explained it perfectly. There are 2^32 integers when you pick from [0,2^32), but only 0x3F800000 32-bit floating-point numbers in [0,1), and the former is not a multiple of the latter. By the pigeonhole principle, dividing by 2^32 cannot map the integers evenly onto those floats, so it cannot be unbiased.
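You can check that count with Python's standard library alone: for nonnegative IEEE 754 floats, the bit patterns sort the same way as the values, so the patterns below that of 1.0 are exactly the 32-bit floats in [0,1), zero and subnormals included. A quick sketch:

    import struct

    # The bit pattern of 1.0 as a 32-bit float doubles as the count of
    # 32-bit floats in [0, 1): patterns 0 .. 0x3F800000-1 cover them all.
    one_bits = struct.unpack('<I', struct.pack('<f', 1.0))[0]
    print(hex(one_bits))     # 0x3f800000 (= 1065353216)
    print(2**32 % one_bits)  # 33554432, nonzero: no even split exists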
It's helpful to look at a smaller example first. Generating random integers in [0,10) by first generating random integers in [0,50) and then dividing by 5 is valid: exactly 5 integers map to each result, with [0,5) mapping to 0, [5,10) to 1, and so on. But what if you instead want numbers in [0,3)? 50 is not a multiple of 3, so no divisor splits [0,50) evenly. Dividing by 17, the 17 integers in [0,17) map to 0 and the 17 in [17,34) map to 1, but only the 16 in [34,50) map to 2, so 2 appears with lower probability than 0 or 1.
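A one-liner makes the skew visible (17 is just the smallest divisor that keeps results below 3):

    from collections import Counter

    # 50 isn't a multiple of 3, so integer division by 17 can't split
    # [0, 50) evenly across the three outputs.
    print(Counter(x // 17 for x in range(50)))
    # Counter({0: 17, 1: 17, 2: 16}) -- 2 is less likely than 0 or 1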
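As for the 257:1 ratio: a quick experiment in plain Python can't show it, because Python's float is 64-bit and dividing a 32-bit integer by 2^32 is exact at that precision. The skew appears once results are rounded to 32-bit floats, which I take to be what Lemire's figure refers to. A rough sketch (assuming NumPy; the target values and ranges are just illustrative picks):

    import numpy as np

    def preimages(target, lo, hi):
        # Count integers x in [lo, hi) for which x / 2**32, rounded to a
        # 32-bit float, equals target. The division is exact in 64-bit
        # floats, so the only rounding is the final cast to float32.
        xs = np.arange(lo, hi, dtype=np.uint64)
        vals = (xs.astype(np.float64) / 2**32).astype(np.float32)
        return int(np.count_nonzero(vals == np.float32(target)))

    # Near 0.75 the float32 grid spacing is 2**-24, so each float collects
    # about 2**-24 / 2**-32 = 256 integers (257 at rounding boundaries).
    c = int(0.75 * 2**32)
    print(preimages(np.float32(0.75), c - 512, c + 512))  # ~256

    # Tiny results are exact: only x = 1 maps to 2**-32.
    print(preimages(np.float32(2.0**-32), 0, 1024))       # 1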
[0]: https://news.ycombinator.com/item?id=41112688
[1]: https://lemire.me/blog/2017/02/28/how-many-floating-point-nu...