Presumably we could actually make decimal floating point computation the default and greatly reduce the amount of surprise. I don't think the performance difference would be an issue for most software.
It would solve more common issues like this though:
> I appreciate that it may be surprising that 0.1 + 0.2 != 0.3 at first, or that many people are not educated about floating point, but I don't understand the people who "understand" floating point and continue to criticize it for the 0.1 + 0.2 "problem."
That's not a calculation that should require a high level of precision.
A lot of real-world data is already in base-10 for obvious reasons, and so an arrangement that lets you add, subtract and multiply those without worrying is worthwhile, even if it can't handle something more exotic.
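For what it's worth, that's exactly what a decimal type buys you here; Python's decimal module (used purely for illustration, like the interpreter snippet further down) gets the textbook case right:

>>> from decimal import Decimal
>>> 0.1 + 0.2 == 0.3
False
>>> Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
True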
Maybe we should also add data types to every language that can convert exactly between inches, feet, miles and every other non-base-10 unit?
The argument "we want to look at base-10 in the end so it should be the internal representation" is really weak and ignores basically every other practical aspect.
The way to avoid this issue is to avoid floating-point numbers that have any implicit zeroes (due to the exponent) after their significant digits. Basically, restrict the range to values where it's guaranteed that for any distinct x1 and x2 in the range, (x1 - x2) produces a non-zero dx such that x2 + dx == x1.
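As a quick sketch of what that guarantee means (the helper name is just illustrative, and the failing case is outside any such restricted range):

>>> def exact_diff_roundtrip(x1, x2):
...     return x2 + (x1 - x2) == x1
...
>>> exact_diff_roundtrip(1.0, 0.5)        # difference is exact, so it round-trips
True
>>> exact_diff_roundtrip(1e16 + 2.0, 1.0) # difference must be rounded, so it doesn't
False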
The only floating-point example off the top of my head is C#'s "decimal", which actually originates from the Decimal data type in the OLE Automation object model (it could be seen in VB6, and can still be seen in VBA):
"scale: MUST be the power of 10 by which to divide the 96-bit integer represented by Hi32 * 2^64 + Lo64. The value MUST be in the range of 0 to 28, inclusive."
The reason it's limited to 28 is that the 96-bit mantissa can represent up to 28 decimal digits exactly. As for how it's enforced: any operation that produces a result outside of this range is an overflow error (an exception in .NET).
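The digit count is easy to verify (a quick check in Python, since that's what's already pasted elsewhere in this thread):

>>> 10**28 < 2**96 < 10**29   # every 28-digit integer fits in 96 bits, but not every 29-digit one
True
>>> len(str(2**96))
29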
I believe IEEE754 floats have that subtraction/addition guarantee (as long as the hardware doesn't map subnormals to zero). The problem in this case is that the input numbers are rounded when they are converted from text/decimal to a float, and so aren't exact.
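You can see that conversion rounding directly: passing a float to Decimal prints out the value that actually got stored (Python again, just for illustration):

>>> from decimal import Decimal
>>> Decimal(0.1)   # the double nearest to 0.1, written out exactly
Decimal('0.1000000000000000055511151231257827021181583404541015625')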
> I believe IEEE754 floats have that subtraction/addition guarantee (as long as the hardware doesn't map subnormals to zero).
They don't - all 11 bits of the exponent (for float64) are in use, so you can have something like 1e300, and then you can't, e.g., add 1 to it and get a different number:
>>> x = 1e100
>>> x
1e+100
>>> y = x + 1
>>> y
1e+100
>>> x - y
0.0
Binary-coded decimal formats have more or less already lived and died (both fixed and floating point). They still have areas of applicability, but this idea is very much not a new one - x86 used to have native BCD support, but it was taken out in amd64 IIRC.