
As far as I understand, mathematically von Neumann computers simply use a specific algebra that is decidedly different from naïve arithmetic. It is actually very useful in certain applications, such as cryptography, that rely on its algebraic properties. It may or may not be a good approximation for some numeric problems, but it is generally faster than working out elaborate mathematical structures.
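To make that concrete: even addition in this algebra is not associative, so the grouping of operations changes the answer. A minimal demonstration (Java here, purely for illustration; the class name is made up):

    public class FloatAlgebra {
        public static void main(String[] args) {
            double a = 1e16, b = -1e16, c = 1.0;
            // Real-number arithmetic says both lines print 1.0.
            System.out.println((a + b) + c); // 1.0
            System.out.println(a + (b + c)); // 0.0: c vanishes next to b
        }
    }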

But I'm only an ignorant student, and I wonder if these solutions are really significantly more effective than implementing the numeric tower or switching to bignums? What are the best practices and the consensus among professionals who have to deal with the inadequacies of floating-point numbers in real-world numeric applications?




I'm nowhere near as mathematically savvy as RoG, but I do have some practical experience from dealing with game physics simulations. A classic problem with naive collision code is objects popping through each other occasionally, and a common 'fix' is to switch from float to double precision. This reduces the frequency of the errors, but that's seldom good enough.

Plane intersections can be performed completely accurately using bignum code, since there are no irrational numbers to creep in, but the performance penalty is far too severe for everyday use in games (at least it was a decade ago, when I was last in the industry). Instead, the best approach was to carefully re-design the algorithms so that they're tolerant of small errors: for example, applying a repulsion force based on how deep inside a surface a polygon is, rather than trying to capture (and possibly missing) the exact point of intersection, or adding a small 'thickness' to all polygons.
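A rough sketch of that penetration-depth idea (hypothetical Java; the names and the linear spring response are my own illustration, not from any real engine):

    // Push an object out in proportion to how deep it has sunk,
    // instead of trying to detect the exact instant of crossing.
    public class SoftContact {
        // signedDistance: distance from the surface, negative when inside.
        // thickness: artificial 'skin' so grazing contacts aren't missed.
        static double penetrationDepth(double signedDistance, double thickness) {
            return Math.max(0.0, thickness - signedDistance);
        }

        // Spring-like response: small overlaps get a gentle nudge, deep
        // overlaps get pushed back hard. An error of a few ulps in
        // signedDistance just changes the force slightly, instead of
        // flipping a hit/miss decision.
        static double repulsionForce(double depth, double stiffness) {
            return stiffness * depth;
        }
    }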

I've found this to be a useful approach in all my subsequent work in both graphics and statistical analysis. I start off using floats for all my real number storage, and if I see precision-related problems, I have a long hard think about how to redesign the algorithm to be more robust, rather than just automatically throwing more bits at it.


Bignums or other constructs are good answers sometimes, and not others. They can help you avoid various bits of piddling about, but at the expense of having an entirely different structure that subsequent maintainers might not be familiar with.

On the other hand, the standard floating point system has been in use for a long time, has well-developed libraries, and has a large body of material devoted to understanding when you need to be careful and when you don't.

In short, calculations of this type need understanding, either of the floating point domain or of an alternative domain, but certainly of the existence of the issues and problems.


First of all, the problems with fixnums vs bignums are very different from the problems with floating point precision. Fixnums are a slightly different mathematical structure from bignums, but they're both commutative groups, and are identical for most inputs. Floating point arithmetic can basically be treated as non-deterministic, which is an entirely different kettle of fish (RoG can probably elaborate better than me how this characterization is and is not valid).
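As an example of the fixnum/bignum relationship: in Java, 32-bit ints silently wrap around modulo 2^32, so they agree with true integers right up to the boundary (a minimal demonstration; the class name is made up):

    import java.math.BigInteger;

    public class FixnumWrap {
        public static void main(String[] args) {
            int big = Integer.MAX_VALUE;          // 2147483647
            System.out.println(big + 1);          // -2147483648: wraps mod 2^32
            System.out.println(BigInteger.valueOf(big).add(BigInteger.ONE)); // 2147483648
        }
    }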

If you're using a naive algorithm for numeric computing, you can get some really atrocious numerical stability issues. If you happen to have exact representations of your numbers, precision isn't an issue, but it's not common that exact representations are even possible. A lot of common numerical operations result in irrationals, including all root, logarithmic, exponential, and trigonometric functions, and those would take an infinite amount of space to represent faithfully. So really, you're always stuck with deciding how much is "enough" for whatever you're doing.
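For instance, the closest double to sqrt(2) already carries an error, and squaring it exposes the gap (Java, standard library only):

    public class Irrationals {
        public static void main(String[] args) {
            double r = Math.sqrt(2.0);        // nearest double to the irrational sqrt(2)
            System.out.println(r * r);        // 2.0000000000000004
            System.out.println(r * r == 2.0); // false
        }
    }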

The amount of space that is "enough" for a naive algorithm varies wildly depending on the particular naive algorithm, and for some of them it is disgustingly high. So, in order to have your naive algorithm output an acceptable answer, you'd have to do a lot of numerical stability analysis. But a similar amount of numerical stability analysis can usually give you a different algorithm that can get "enough" precision with the standard sizes of floats. Using standard floats gives all sorts of other nice benefits, like being supported in modern languages, better speed and space performance, and hardware acceleration.
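Kahan (compensated) summation is the textbook example of such a redesign: same inputs, same 64-bit doubles, far less accumulated error, just a smarter loop. A minimal sketch:

    public class KahanSum {
        // Carry the rounding error of each addition forward in a
        // compensation term instead of discarding it.
        static double kahanSum(double[] xs) {
            double sum = 0.0, c = 0.0;    // c holds the lost low-order bits
            for (double x : xs) {
                double y = x - c;         // re-inject previously lost bits
                double t = sum + y;       // big + small: low bits of y get dropped
                c = (t - sum) - y;        // recover exactly what was dropped
                sum = t;
            }
            return sum;
        }

        public static void main(String[] args) {
            double[] xs = new double[1_000_000];
            java.util.Arrays.fill(xs, 0.1);
            double naive = 0.0;
            for (double x : xs) naive += x;
            System.out.println(naive);        // drifts away from 100000.0
            System.out.println(kahanSum(xs)); // much closer to 100000.0
        }
    }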

Additionally, most of the common pitfalls of floats are well-known by now, and have well-known solutions, and there are a number of variants and libraries for most 'workhorse' algorithms that trade between speed, space, and numerical stability. It comes down to being another case of knowing your tools.

(PS--Numerical stability doesn't have anything to do with the von Neumann architecture, only with the representation of numeric values.)


"A lot of common numerical operations result in irrationals, including all root, logarithmic, exponential, and trigonometric functions, and that would take an infinite amount of space to represent faithfully."

Actually, symbolic representation can help here - not just to represent the values, but also to improve the final precision. As a trivial example, 2/sqrt(2) can be reduced to sqrt(2), with no loss of precision from the division (checked numerically below).

This is a lot of hassle, though, and not a perfect solution, but it helps avoid rounding errors from irrational constants or functions... in a sense, some of your intermediate values have infinite precision (but in reality you removed the operation with a clever optimisation/precision-conserving trick).
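A quick check of the 2/sqrt(2) example (Java doubles; whether the two results differ can depend on the values involved, but with IEEE doubles the extra division does land one ulp away here):

    public class Rationalize {
        public static void main(String[] args) {
            double direct  = Math.sqrt(2.0);        // one rounding: sqrt
            double divided = 2.0 / Math.sqrt(2.0);  // two roundings: sqrt, then divide
            System.out.println(direct == divided);  // false: one ulp apart
            System.out.println(direct - divided);   // ~2.22e-16
        }
    }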


> (PS--Numerical stability doesn't have to do with von Neumann architecture, only with representation of numeric values)

Which you may view as part of the von Neumann architecture. But only if you compare it to something radically different, like analogue computers.


On a related topic: it is surprising how many business applications use floating-point values to represent currency. That is very wrong, because it creates rounding errors that accumulate. A common symptom is an invoice whose total cannot be reproduced from the line items using a regular handheld calculator.
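The drift is easy to reproduce (Java, standard library only):

    public class CurrencyDrift {
        public static void main(String[] args) {
            double total = 0.0;
            for (int i = 0; i < 10; i++) total += 0.10; // add ten dimes
            System.out.println(total);        // 0.9999999999999999
            System.out.println(total == 1.0); // false
        }
    }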

How should you handle currency? At least in Java there is BigDecimal, which should be used instead. Look at the divide() method to see how you can have different rounding behaviors. As an alternative, some people use integers with a convention that there is a decimal point in the nth position. Just remember that floating point shouldn't be used for currency.
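For example, BigDecimal's divide() makes you state the scale and rounding mode up front instead of rounding silently (a small sketch; the amounts are made up):

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class ExactMoney {
        public static void main(String[] args) {
            BigDecimal total = new BigDecimal("100.00");
            // 100/3 has no exact decimal representation, so an explicit
            // scale (2 digits) and rounding mode are required.
            BigDecimal share = total.divide(new BigDecimal("3"), 2, RoundingMode.DOWN);
            System.out.println(share);  // 33.33
            // The remainder is explicit, not lost: 0.01 left to allocate.
            System.out.println(total.subtract(share.multiply(new BigDecimal("3"))));
        }
    }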


> How should you handle currency?

Cents, or fractions thereof, fit into an integer, unless you have to do currency conversion.
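A sketch of the integer-cents approach (hypothetical code, my own names; the point is that the shares always sum back to the total exactly):

    public class IntegerCents {
        // Split a total given in cents, handing out the leftover
        // pennies one at a time so nothing is lost to rounding.
        static long[] split(long totalCents, int parts) {
            long[] out = new long[parts];
            long base = totalCents / parts;
            long leftover = totalCents % parts;
            for (int i = 0; i < parts; i++) {
                out[i] = base + (i < leftover ? 1 : 0);
            }
            return out;
        }

        public static void main(String[] args) {
            for (long s : split(10000, 3)) {  // $100.00 three ways
                System.out.println(s);        // 3334, 3333, 3333
            }
        }
    }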

If you have to do that, you hire a programmer who has studied numerical analysis. It's a fun course, but very hard. I remember studying with a textbook the professor himself wrote and not being able to find much information online. This has changed quite a bit in the past decade or so. Nowadays, you can find useful stuff like this online:

http://en.wikibooks.org/wiki/Real_Analysis
http://floating-point-gui.de/
http://docs.sun.com/source/806-3568/ncg_goldberg.html




