That doesn't seem dumb at all. Making BCD the default would mean floats use about 17% more space for the same precision. That might seem like a small loss, but it buys an incredibly small gain: programmers would still have to be aware that testing two decimals for exact equality is dangerous. I don't see the problem with 0.1 + 0.2 coming out as 0.30000000000000004 if you aren't testing floats for direct equality.
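For what it's worth, here's a minimal sketch in Python (plain IEEE 754 binary doubles, no BCD) of what that looks like in practice; `math.isclose` is the stdlib way to do the tolerance comparison:

```python
import math

total = 0.1 + 0.2
print(total)        # 0.30000000000000004
print(total == 0.3) # False -- exact equality is the real trap

# The usual fix: compare within a tolerance instead of testing equality.
print(math.isclose(total, 0.3))  # True (default rel_tol=1e-09)
print(abs(total - 0.3) < 1e-9)   # True, hand-rolled version of the same idea
```

As long as comparisons go through a tolerance like that, the trailing-digit noise in the binary representation never matters.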