There's no such thing as a "binary" or "decimal" number.
In the real world, there are natural numbers, integers, rationals and real numbers.
Computer languages are designed with types that mimic this real-world number stack. Low-level binary implementation details don't leak unless you're overflowing or using bit operations.
What you're really complaining about is the fact that rationals aren't a first-class type in any popular language. With that I agree; it's a shame that corners were cut and we got three number types instead of four.
On the contrary, decimal number types (such as the one in C#, which is the one I'm most familiar with) address another kind of number that occurs very frequently in "the real world" and is inherently base ten: numbers with a fixed number of digits past the decimal point and with specific rounding rules. These are incredibly common in finance.
The whole point is to support the kind of accounting that happens in real business, which does not work with rational numbers. One third of your bill does not charge you one third of a penny; these aren't rational numbers, and the operations on them involve rounding.
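To make the rounding-rule point concrete, here is a rough sketch using Python's decimal module (standing in for the C# decimal type mentioned above), splitting a bill in thirds with an explicit rounding rule:

    from decimal import Decimal, ROUND_HALF_UP

    bill = Decimal("100.00")
    # The rounding rule is part of the business logic, stated explicitly.
    one_third = (bill / 3).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    print(one_third)      # 33.33
    print(one_third * 3)  # 99.99 -- the missing cent is visible, not hidden in low-order bits

The lost cent is a business decision, not an accident of the representation.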
Dec64 does not reflect what happens in real business any more than IEEE floats do. "There are no reals" applies to currency too, but in a stronger form: in currency there are no fractions either. Translation: in real life, no one can give you a fraction of a cent. So the programmer has to make a decision about what happens to those fractions when you give 1/3 off.
Every newbie programmer tries to avoid thinking about this by using IEEE floats. They discover the problem years later, after some anal auditor has come down on them like a ton of bricks for the hundredth time, because the dropped low-order bits from 1/3 of a cent hit that 1 in a billion case and affected a significant digit. Then it finally dawns on them that 1 in a billion isn't really 1 in a billion, because thousands of such calculations get combined into a single profit and loss figure that is out by 2 cents, and chasing that 2 cents for 2 weeks only to discover it was caused by a computer that can't compute really, really pisses off the auditor. That's when you realise that if you aren't thinking about those fractions of a cent as hard as a C programmer focuses on malloc(), they will have gone to whoop whoop in half the code you have written. You will have nightmares about divide signs for the rest of your life. Crockford seems to think Dec64 allows the programmer to avoid thinking about the problem. He is just as wrong as every newbie programmer.
There is only one safe format for currency that accurately reflects this reality, one that forces you to accept that you must think about those fractions of a cent: the humble integer. You put in it the lowest denomination, cents in the case of the USA. And then when you write total = subtotal / 3 * 4 and you test it, your error stands out like dog's balls, and you fix it.
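A rough sketch of what that looks like in practice (Python here, but the idea is language-agnostic): amounts are plain integers of cents, and any split forces an explicit decision about where the leftover cents go:

    def split_cents(total_cents: int, parts: int) -> list[int]:
        base, remainder = divmod(total_cents, parts)
        shares = [base] * parts
        shares[-1] += remainder  # deliberate choice: the last share absorbs the odd cents
        return shares

    print(split_cents(10000, 3))  # [3333, 3333, 3334] -- adds back up to exactly 10000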
Tangent: in the real world, there are no real numbers. Whether or not there are arbitrary rational numbers is something of an open question.
"Binary number" as used by grandparent really refers to dyadic rationals (https://en.wikipedia.org/wiki/Dyadic_rational), which are a perfectly well-defined dense subset of the rationals. Similarly, "decimal number" is really "terminating decimal expansion" (or whatever you want to call the decimal analogue of dyadic rational), which is again a well-defined dense subset of the rationals. This is a perfectly valid mathematical distinction; the numbers that people work with day-to-day are much more frequently the latter.
Hrmmm. I thought something was fishy with the statement: "Tangent: in the real world, there are no real numbers". The reals are defined as the set of all rational and irrational numbers on the number line. See https://en.wikipedia.org/wiki/Real_number for a reasonably pedagogical discussion.
There are reals, they do exist. In the "real" world (as poorly defined as that is).
The issue is, fundamentally, that what programming languages call "real numbers" are not real numbers. They are an approximation to a subset of the reals. This approximation has holes, and the implementations work, to some degree, to define regions of applicability and regions of inapplicability. Usually people get hung up or caught in the various traps (inadvertently) added to the specs for "Reals".
It's generally better to say "floating point numbers" than "Reals" in CS, simply because floating point is that subset of the Reals that we are accustomed to using.
I definitely agree with the comment on rationals. I am a fan of Perl6, Julia and other languages' ability to use rationals as first-class number types.
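For anyone who hasn't used them, here is roughly what a first-class rational buys you, with Python's fractions.Fraction standing in for Julia's Rational or Perl6's Rat:

    from fractions import Fraction

    print(0.1 + 0.2 == 0.3)                                      # False with binary floats
    print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True with rationals
    print(sum([Fraction(1, 3)] * 3))                             # 1, exactly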
Sadly, as with other good ideas that require people to alter their code/libraries, I fear this will not catch on due to the implicit momentum of existing systems.
In a very reasonable sense, the real numbers do not exist in the real world. Almost all real numbers are non-computable, so under apparently reasonable assumptions about what experiments you can conduct, there is no experiment you can do that will produce a measurement with the value of most real numbers.
From this, it's fairly non-controversial to say that only the computable reals exist; these are a tiny (measure-zero) subset of the reals.
If you go further and assume a fundamentally discrete universe (much more controversial), then all you can really measure are integers.
For Floats in particular, binary implementation details leak all the time due to rounding. A number like 0.0625 can be represented in binary exactly with only 4 bits, but a number like 0.1 can only be represented approximately even when using 64 bits.
This could be solved with a rational-style data type, but I consider the fact that real-style data types don't capture that to also be the implementation leaking.
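One easy way to see the leak, as a Python sketch (Fraction(x) recovers the exact value a float actually stores):

    from fractions import Fraction

    print(Fraction(0.0625))  # 1/16 -- a dyadic rational, stored exactly
    print(Fraction(0.1))     # 3602879701896397/36028797018963968 -- the nearest double to 0.1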
Rationals are not generally useful for scientific computation because eventually you need to start invoking the relatively costly Euclidean algorithm to do basic operations, so the compute time is no longer constant. Also, lots of operations take square roots anyway, so you wind up in trouble that way too.
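A rough sketch of the blow-up, using exact rationals for Newton's method on sqrt(2): the denominator roughly doubles in length every step, and every add and divide pays for a gcd to keep the fraction reduced:

    from fractions import Fraction

    x = Fraction(1)
    for step in range(1, 6):
        x = (x + 2 / x) / 2         # Newton iteration for sqrt(2), kept exact
        print(step, x.denominator)  # denominators: 2, 12, 408, 470832, 627013566048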
Yeah, you're right, there's a good historic reason for this current mess. Computing originated in scientific computational math problems, so the algorithms and data types are biased for that.
Javascript programmers don't need a number type designed for accurately computing trig and square roots and representing pi, but they got one anyways.
XLISP (and therefore XLISP-STAT) had rational numbers[1] and came out in 1988, R2RS[2] introduced rational numbers in 1985. The Lisp Machines Lisps had rational numbers. There were a lot of people who had rational numbers in the 1980s and the early 1990s.