Fascinating discussion. There are a couple of threads here, and they can be summed up as precision arguments and range arguments. I confess I'm friends with Mike Cowlishaw (the guy behind speleotrove.com), and he's influenced my thinking on this quite a bit.
So precision arguments generally come under the heading of how many significant digits you need: if it's fewer than 15 or so, you're fine using a binary representation. If it's more than 15 you can still use binary, but it's not clear to me that it's a win.
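To make that 15-digit cutoff concrete, here's a minimal Java sketch (Java only because that's the language I mention below) of the two ways binary bites you: digits past the roughly 15-17 that a 64-bit double can hold are silently dropped, and some short decimals have no exact binary form at all.

    public class PrecisionDemo {
        public static void main(String[] args) {
            // A 64-bit binary double holds roughly 15-17 significant decimal
            // digits; anything beyond that is silently discarded.
            double a = 1.00000000000000001;   // 18 significant digits
            System.out.println(a == 1.0);     // true -- the trailing 1 is lost

            // And some short decimals have no exact binary form at all.
            System.out.println(0.1 + 0.2);    // prints 0.30000000000000004
        }
    }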
The second is range. If you're simulating all of the CO2 molecules in the atmosphere, and you actually want to know how many there are, and you want to work with the precise values of the ratios of N2, NO2, H2, O2, CO2, etc. in the atmosphere, then, as I understand it, you're stuck approximating. (For context, I was asking a scientist from Sandia National Laboratories about their ability to simulate nuclear explosions at the particle level, and wondering if climate scientists could do the same for simulating an atmosphere of molecules, or better atoms, or even better, particles.) That is a problem where there is a lot of dynamic range in your numbers: adding 1 to .5866115*10^20 doesn't really change the value, because the increment falls below the precision the representation has left at that magnitude.
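Here's a quick Java sketch of that, using the same magnitude as the number above. At ~5.9*10^19 the gap between adjacent doubles is 8192, so adding 1 simply vanishes.

    public class RangeDemo {
        public static void main(String[] args) {
            // At this magnitude the gap between adjacent doubles (the ulp)
            // is 8192, so an increment of 1 is below the resolution of the
            // representation and rounds away to nothing.
            double molecules = 5.866115e19;   // same value as .5866115*10^20
            System.out.println(molecules + 1 == molecules);   // true
            System.out.println(Math.ulp(molecules));          // 8192.0
        }
    }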
And yes, you can build arbitrary-precision arithmetic libraries (I built one for Java way back in the old days), but if you're working with numbers at that scale it gets painful without hardware support of some kind.
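For what it's worth, the JDK now ships arbitrary-precision types in java.math. A small sketch of the same addition done exactly, with the obvious cost that every operation is a library call on heap objects rather than one hardware instruction:

    import java.math.BigDecimal;

    public class ExactDemo {
        public static void main(String[] args) {
            // BigDecimal keeps every decimal digit, so the +1 is not lost...
            BigDecimal molecules = new BigDecimal("58661150000000000000");
            System.out.println(molecules.add(BigDecimal.ONE));
            // ...prints 58661150000000000001, at the price of doing the
            // arithmetic in software.
        }
    }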
In my day-to-day use I find that imprecision in binary screws up navigation in my robots as they try to figure out where they are, but repeated re-localization helps keep the error from accumulating. And yes, it's a valid argument that dead reckoning is for sissies, but it's very helpful when you have a limited sensor budget :-)
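Purely on the floating-point side (leaving out sensor noise, which in practice is the bigger problem), here's a toy Java sketch of how the drift accumulates when you just keep adding small odometry increments, which is exactly the kind of error a periodic re-localization fix wipes out:

    public class DriftDemo {
        public static void main(String[] args) {
            // Dead reckoning: integrate a million odometry steps of 0.1 m.
            // 0.1 has no exact binary form, and each addition rounds, so the
            // error compounds instead of cancelling.
            double position = 0.0;
            for (int step = 0; step < 1_000_000; step++) {
                position += 0.1;
            }
            System.out.println(position);            // not exactly 100000.0
            System.out.println(position - 100000.0); // the accumulated drift
            // A re-localization step would overwrite position with a fresh
            // sensor fix, resetting the drift instead of letting it grow.
        }
    }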