The linked post is a bit poorly expressed, but I think there is a good point there: fixed-size binary floating-point numbers are a compromise, a poor one for some applications, and difficult to use reliably without some knowledge of numerical analysis. (For example, suppose you have an array of floating-point numbers and you want to add them up, getting the closest representable approximation to the true sum. This is a very simple problem and ought to have a very simple solution, but with floating-point numbers it does not [1].)
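To make that concrete, here is a minimal sketch in Python: naive left-to-right summation loses low-order bits, Kahan's compensated summation [1] recovers them by carrying the rounding error forward, and math.fsum produces the correctly rounded sum directly.

    import math

    def kahan_sum(xs):
        # Compensated summation: keep the rounding error of each
        # addition in `c` and feed it back into the next step.
        total = 0.0
        c = 0.0
        for x in xs:
            y = x - c
            t = total + y
            c = (t - total) - y
            total = t
        return total

    xs = [0.1] * 10
    print(sum(xs))        # 0.9999999999999999 (naive left-to-right)
    print(kahan_sum(xs))  # 1.0
    print(math.fsum(xs))  # 1.0 (correctly rounded sum of the array)

The point is that the obvious one-liner is the wrong one, and knowing that requires exactly the numerical-analysis background most programmers don't have.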
Perhaps it is time for the developers of new programming languages to consider a different approach to representing approximations to real numbers, for example the General Decimal Arithmetic Specification [2], and to relegate fixed-size binary floating-point numbers to a library for use by experts.
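Python's decimal module is in fact an implementation of that specification [2], so it gives a feel for what such a default might look like:

    from decimal import Decimal, getcontext

    getcontext().prec = 34    # decimal128-style precision, for example
    a = Decimal('0.1')
    print(a + a + a)          # 0.3, exactly as written
    print(0.1 + 0.1 + 0.1)    # 0.30000000000000004 in binary floating point

Decimal floating point is still floating point (1/3 is still inexact); the gain is that the numbers people actually type, like 0.1, behave as written.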
There is an analogy with integers: historically, languages like C provided fixed-size binary integers with wrap-around or undefined behaviour on overflow, but with experience we recognise that these are a poor compromise, responsible for many bugs, and suitable only for careful use by experts. It is much easier to write reliable programs in modern languages with arbitrary-precision integers.
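Python's integers are an existence proof: they are arbitrary-precision by default, so the whole class of overflow bugs simply does not arise.

    n = 2 ** 63 - 1      # INT64_MAX in a fixed-width language
    print(n + 1)         # 9223372036854775808: no wrap-around, no UB
    print(2 ** 200)      # integers simply grow as needed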
Do note that UB on signed integer overflow is (at least nowadays) more of a concession to compiler optimization than a technical necessity: any CPU made since the 80s will simply wrap around in two's complement, but a C++ compiler is allowed to assume the overflow never happens, for example for a signed loop index.
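A rough illustration, simulated in Python since Python's own integers cannot overflow: the hardware produces a two's-complement wrap, while the C++ standard lets the compiler assume the operation never overflows at all.

    def wrap_int32(x):
        # What a 32-bit two's-complement ALU actually produces.
        x &= 0xFFFFFFFF
        return x - 0x100000000 if x >= 0x80000000 else x

    INT32_MAX = 2 ** 31 - 1
    print(wrap_int32(INT32_MAX + 1))  # -2147483648: the wrapped hardware result
    # In C/C++ the same expression on a signed int is UB, so the compiler may
    # e.g. treat `i + 1 > i` as always true when `i` is a signed loop index.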
[1] https://en.wikipedia.org/wiki/Kahan_summation_algorithm

[2] http://speleotrove.com/decimal/decarith.html