People get upset that floating point can’t represent the infinite set of real numbers exactly - I can’t understand how they think that would ever be possible in a finite 64 bits.
To hit the point home a little harder: you can easily iterate through the entire representable set of float32 on a modern machine within seconds. I've encountered many engineers who don't quite get that.
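For anyone who hasn't seen it done: every float32 value corresponds to one of the 2^32 bit patterns, so you can enumerate them all by reinterpreting integers as floats. A minimal sketch in Python (the full 2^32 loop is commented out; in Python it would take minutes-to-hours rather than seconds, which is why people usually do the exhaustive sweep in a compiled language):

```python
import struct

def bits_to_float32(i: int) -> float:
    # Reinterpret a 32-bit unsigned integer as an IEEE 754 float32
    return struct.unpack('<f', struct.pack('<I', i))[0]

# Exhaustive sweep over every representable float32 (slow in pure Python):
# for i in range(2**32):
#     f = bits_to_float32(i)
#     check_some_property(f)

# A few spot checks of the bit-pattern correspondence:
print(bits_to_float32(0x3F800000))  # 1.0
print(bits_to_float32(0x00000000))  # 0.0
print(bits_to_float32(0x7F800000))  # inf
```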
"You can rent a Skylake chip on Google Cloud that'll perform 1.6 trillion 64 bit operations per second for $0.96/hr preemptively. That's enough to run one instruction over a 64 bit address space exhaustively over 120 days, or for ~$2800"
It might not make economic sense to actually do this for any realistic test, but it's interesting that it might actually be feasible on any kind of human timescale...
Maybe, but I'd rather have a test suite that's designed to test hardware than overload some code's unit tests.
I think most unit tests are best served by testing key values - e.g. values before and after any intended behavior change, values that represent min/max possible values, and values indicative of typical use.
The unit test can serve as documentation of what the code is intended to do, and meaninglessly invoking every unit test over the range of floats obscures that.
There are certainly cases where all values should be tested, but I don't think that's all cases.
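A minimal sketch of what "testing key values" might look like, using a hypothetical `clamp` function as the code under test (the function and the chosen values are illustrative, not from the thread):

```python
import math

def clamp(x: float, lo: float = 0.0, hi: float = 1.0) -> float:
    # Hypothetical function under test
    return max(lo, min(hi, x))

# Boundary values
assert clamp(0.0) == 0.0
assert clamp(1.0) == 1.0
# Typical value
assert clamp(0.5) == 0.5
# Extremes
assert clamp(-math.inf) == 0.0
assert clamp(math.inf) == 1.0
# Just past the intended behavior change at the upper bound
assert clamp(math.nextafter(1.0, 2.0)) == 1.0
```

Each assertion documents an intended behavior; sweeping all 2^32 float32 inputs would verify the same function without communicating any of that intent.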
Or on any computer at all, even an “infinite” (at least unbounded) computer like a Turing machine, considering that almost all real numbers are not computable.
Well, you don't need to represent all the real numbers. You can get quite far with just rationals or algebraic numbers, although you'll have trouble with exponentials and trigonometry. And computable numbers are basically superior to any other number system for computation.
You of course need an unbounded but finite amount of space to store these numbers, which is perfectly fine.
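To illustrate the rationals point: Python's `fractions.Fraction` stores an exact numerator/denominator pair over arbitrary-precision integers, so it uses exactly that unbounded-but-finite amount of space and avoids the classic binary-float rounding artifacts:

```python
from fractions import Fraction

# In binary float64, 0.1 + 0.2 is not exactly 0.3
print(0.1 + 0.2 == 0.3)  # False

# With exact rationals it is
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```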
> And computable numbers are basically superior to any other number system for computation.
I don't think that's really quite true. The point of FP is that you don't get any weird statefulness in your compute cost as values accumulate: every operation basically runs in O(1) time with respect to N, the number of previous operations you've done. For rationals and algebraics that isn't the case.
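This cost growth is easy to demonstrate with `fractions.Fraction`: repeatedly squaring roughly doubles the number of digits in the denominator each step, so the per-operation cost depends on the whole history of operations (the starting value 355/113 is just an arbitrary rational):

```python
from fractions import Fraction

x = Fraction(355, 113)  # arbitrary starting rational
sizes = []
for _ in range(8):
    x = x * x + 1  # each squaring roughly doubles the digit count
    sizes.append(x.denominator.bit_length())

# Denominator size in bits after each step - grows geometrically,
# so each operation is slower than the last.
print(sizes)
```

A float64 doing the same loop stays 64 bits wide forever (at the cost of rounding each step), which is exactly the O(1)-per-operation property being described.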