



I didn't downvote you, but this isn't a problem with computers. It's a problem with the (mis)use of floats.

Floats are not decimals. That's unfortunately a really, really common misconception, owing in part to poor education. Developers reach for floats to represent decimals without thinking about the precision ramifications.

When you're working with decimals that don't need a lot of precision, this doesn't generally come up (and naturally, those are the numbers typically used in textbooks). But when you start doing floating-point arithmetic with decimals that require significant precision, things get bizarre very fast.

Unfortunately, if a developer isn't expecting it, that's likely to happen in production processing code at a very inopportune time. But the computer is just doing what it's told - we have the tools to support safe and precise arithmetic with decimals that need it. It's a matter of knowing how and when to use floating point.
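
For a concrete illustration of the point (a minimal sketch in Python, chosen here purely for illustration): binary floats can't represent most decimal fractions exactly, while a decimal type can.

    from decimal import Decimal

    # Binary doubles: 0.1 and 0.2 are already approximations.
    print(0.1 + 0.2)                        # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)                 # False

    # A decimal type represents these values exactly.
    print(Decimal("0.1") + Decimal("0.2"))  # 0.3
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True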


FWIW, a fixed-precision floating-point decimal type would have the same problem. At some point the spacing between two consecutive floating-point values (ULP [1]) simply becomes more than one, no matter the radix.

[1] https://en.wikipedia.org/wiki/Unit_in_the_last_place
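
A quick Python sketch of the same ULP point, using the decimal module as a stand-in for a fixed-precision decimal float (the precisions chosen here are arbitrary):

    from decimal import Decimal, getcontext

    # Binary double: above 2**53 the gap between adjacent values exceeds 1.
    print(2.0**53 + 1 == 2.0**53)      # True - the +1 is lost

    # A decimal type fixed at 6 significant digits hits the same wall.
    getcontext().prec = 6
    print(Decimal(1000000) + 1)        # 1.00000E+6 - the +1 is lost again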


You're probably being downvoted for posting like you're on some other site, more so than for your sentiment that this is just a simple CS 101 thing that people ought to know.

Thing is, a lot of people don't take CS courses and have to learn this as they go along. More importantly, the naive cases all seem to work fine - it's only when you get to higher precision or larger scales that you notice the cracks in the facade, and that's only if you have something that depends on the real accuracy (e.g. real-world consequences from being wrong) or if someone bothers to go and check (using some other calculator that gives more precise results).

My own view on it is that it's past bloody time for languages to offer a fully abstracted class of real numbers with correct, arbitrary-precision math - obviating the need for the developer to specify integer, float, long, etc. I don't mean that every language should act like this, but ones aimed at business software development, for example, would do well to provide a first-class primary number type that simply covers all of this properly.

Yes, I can understand that the performance will not be ideal in all cases, but the tradeoff in terms of accuracy, out-of-the-box productivity, and avoiding common problems would probably be worth it for a pretty big subset of working developers.
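
As a rough sketch of what such a type could feel like (using Python's exact rationals as a stand-in; the variable names are just for illustration):

    from fractions import Fraction

    # Exact rational arithmetic: no rounding error accumulates.
    price = Fraction("19.99")
    total = price * 3
    print(total)                       # 5997/100
    print(float(total))                # 59.97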


What is "properly" though? There's many real numbers that don't have finite representation. Arbitrary precision is all well and good, but as long as you're expressing things as binary-mantissa-times-2^x, you aren't going to be able to precisely represent 0.3. You could respond by saying that languages should only have rationals, not reals, but then you lose the ability to apply transcendental functions to your numbers, or to use irrational numbers like pi or e.

Performance is only part of the problem, and what it prevents is more-precise floats (or unums or decimal floats or whatever). The other part of the problem is that we want computers with a finite amount of memory to represent numbers that are mathematically impossible to fit in that memory, so we have to work with approximations. IEEE-754 is a really fast approximator that does a good job of covering the reals with integers at the magnitudes people tend to use, so its longevity makes sense to me.
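
Both points are easy to see in Python (used here only as an illustration): the double closest to 0.3 is a slightly different rational, and exact rationals can't hold pi at all, only the double nearest to it.

    from fractions import Fraction
    import math

    print(Fraction(0.3))      # 5404319552844595/18014398509481984 - the double nearest 0.3
    print(Fraction(3, 10))    # 3/10 - the number actually meant
    print(Fraction(math.pi))  # only the double nearest pi, not pi itself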


Exact real arithmetic is an open research problem (and slow, as well). Arbitrary precision has its own can of worms and is slow, too.


Not really. It's been used successfully in all Lisps for over 30 years. gmp is not really slow, and for limited precision (2k) there are even faster libs.


gmp is not exact. It's just arbitrary-precision. There's a very large difference. Exact arithmetic handles numbers like pi with infinite precision. When you use gmp, you pre-select a constant for pi with a precision known ahead of time. In the real world, 64 bits of pi is more than enough for almost every purpose, so whatever. It's fine. But there's a huge conceptual gap between that and exact arithmetic.
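
Sketch of that distinction in Python, using mpmath purely as an illustration of the arbitrary-precision approach (the same idea applies to gmp/MPFR bindings): the number of digits of pi is chosen up front, not computed on demand to whatever precision the result needs.

    from mpmath import mp

    mp.dps = 50          # digits of working precision, chosen ahead of time
    print(mp.pi)         # pi to ~50 digits, and no more

    mp.dps = 15
    print(mp.pi)         # back to roughly double precision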


I never said that. For simple, non-symbolic languages, gmp is still the best.

Lisp is of course better, optimizing expressions symbolically as far as possible, e.g. to rationals, and using bignums and bigints internally. As exact as possible. perl6 does it too, just 100x slower.
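
A loose Python analogue of that Lisp behaviour (Python used just for illustration), keeping values as exact ratios for as long as the arithmetic allows:

    from fractions import Fraction

    x = Fraction(1, 3) + Fraction(1, 6)
    print(x)             # 1/2 - stays an exact ratio
    print(x * 2)         # 1 - reduces to an exact integer value

    print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True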



