
Yep, floating point numbers are intended for scientific computation on measured values; however many gotchas they have when used as intended, there are even MORE if you start using them for numbers that are NOT that: money, or any kind of "count" rather than measurement (like, say, a number of bytes).
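
E.g., a quick Python sketch (Decimal here just standing in for any BigDecimal-style type):

    from decimal import Decimal

    # The classic float surprise: 0.1 and 0.2 have no exact binary representation.
    print(0.10 + 0.20)                    # 0.30000000000000004
    print(0.10 + 0.20 == 0.30)            # False

    # A decimal type behaves the way people expect for money-like values,
    # provided you construct it from strings rather than from floats.
    print(Decimal("0.10") + Decimal("0.20"))                     # 0.30
    print(Decimal("0.10") + Decimal("0.20") == Decimal("0.30"))  # True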

The trouble is that people end up using them for any non-integer ("real") number. It turns out that in modern times, scientific calculations on measured values are not necessarily the bulk of the calculations in the software people actually write.

In the 21st century, I don't think there's any good reason for a literal like `21.2` to represent an IEEE float instead of a non-integer representation that works more the way people expect for 'exact' numbers (i.e., based on decimal rather than binary arithmetic, and supporting more significant digits than an IEEE float; a so-called "BigDecimal"), at the cost of some performance that you can usually afford.

And yet, in every language I know, even newer ones, a decimal literal gives you a float! It's just asking for trouble. IEEE float should be the 'special case' requiring special syntax or instantiation; a literal like `98.3` should get you a BigDecimal!
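
A sketch of the status quo in Python, which is typical of the languages I'm thinking of:

    from decimal import Decimal

    x = 98.3                  # today this literal gives you an IEEE-754 double
    print(type(x))            # <class 'float'>
    print(Decimal(x))         # the digits of the nearest double: not exactly 98.3

    y = Decimal("98.3")       # the 'exact' spelling needs explicit construction
    print(y)                  # 98.3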

IEEE floats are a really clever design from a time when memory was much more constrained and scientific computing was a larger portion of the universe of software. But now they ought to be a specialty tool, not the go-to representation for non-integer numbers.




I think you are significantly underestimating the prevalence of floating point calculations; there is a reason Intel and AMD created all those special SIMD instructions. Multimedia is a big user, for example. You are also seriously underestimating the performance cost of using decimal types; we are talking orders of magnitude.
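
A rough micro-benchmark sketch (pure Python, so the exact ratio will vary by interpreter and hardware; compiled SIMD float code would widen the gap much further):

    import timeit
    from decimal import Decimal

    floats = [0.1] * 1_000_000
    decimals = [Decimal("0.1")] * 1_000_000

    t_float = timeit.timeit(lambda: sum(floats), number=10)
    t_dec = timeit.timeit(lambda: sum(decimals, Decimal(0)), number=10)

    print(f"float sum:   {t_float:.3f}s")
    print(f"Decimal sum: {t_dec:.3f}s ({t_dec / t_float:.0f}x slower)")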


Fair! Good point about multimedia/animation/etc.

There are still a lot of people doing a lot of work in which they hardly ever want a floating point number but end up using one because it's the "obvious" thing you get when you just write `4.2`, while a BigDecimal is cumbersome to use.
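
E.g., compare the two spellings in Python (the tax rate is just a made-up illustration):

    from decimal import Decimal

    # What the syntax nudges you toward:
    subtotal = 4.2
    tax = subtotal * 0.0825

    # What exact decimal arithmetic requires: string literals and wrapping everywhere.
    subtotal = Decimal("4.2")
    tax = subtotal * Decimal("0.0825")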


I like that idea too. I wonder why Python doesn't use bigdecimals by default. Maybe because it seems to require you to choose a precision?
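
The stdlib decimal module does carry a context precision around (28 significant digits by default), so some precision always has to be chosen somewhere. A quick sketch:

    from decimal import Decimal, getcontext

    # decimal isn't arbitrary-precision: results are rounded to the
    # context precision, which is 28 significant digits by default.
    print(getcontext().prec)          # 28
    print(Decimal(1) / Decimal(3))    # 0.3333333333333333333333333333

    # You can raise it, but *some* precision always has to be chosen.
    getcontext().prec = 50
    print(Decimal(1) / Decimal(3))    # fifty 3s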



