
Right, and I can see why that's a problem in accounting, but why does it matter for scientific computing? I do a fair amount of stuff that could be called scientific computing, and I just use doubles. If I need to keep track of uncertainty or propagate errors, I normally use Gaussians as the representation (not "significant figures" as would be implied by using decimal).



It almost never matters in scientific computing. Doubles give us the equivalent of almost 16 digits of accuracy, and that's more precision than we know any physical constant to. You're right that the world isn't decimal, and switching to decimal encodings actually reduces the effective precision of any computation.
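A quick back-of-the-envelope check in Python (standard library only) of where that figure comes from:

    import math
    import sys

    # A binary64 double carries a 53-bit significand, i.e. about
    # 53 * log10(2) ~= 15.95 decimal digits -- the "almost 16 digits" above.
    print(53 * math.log10(2))       # ~15.95

    # sys.float_info reports the same figures directly.
    print(sys.float_info.mant_dig)  # 53: bits in the significand
    print(sys.float_info.dig)       # 15: decimal digits always preserved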


There's a reason they're called the natural numbers. Nature doesn't have to be decimal for decimals to be useful (the question that started this debate), it just has to be rational. Many many many parts of nature are rational, and sometimes we need to deal with them in scientific computing. DNA sequence processing comes to mind.


It matters for numerical methods, which are frequently used in optimizations and simulations. A recent example: https://www.circuitlab.com/blog/2013/07/22/double-double-ple...
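The double-double trick in that post is built from error-free transformations; here is a minimal Python sketch of the idea (the helper names are my own, not taken from the linked article):

    def two_sum(a, b):
        # Knuth's error-free transformation: s is the rounded sum,
        # e is the exact rounding error, so a + b == s + e exactly.
        s = a + b
        t = s - a
        e = (a - (s - t)) + (b - t)
        return s, e

    def dd_sum(values):
        # Accumulate in double-double style: keep the error term that
        # a plain running sum would silently discard.
        hi, lo = 0.0, 0.0
        for v in values:
            hi, e = two_sum(hi, v)
            lo += e
        return hi + lo

    vals = [1e16, 1.0, -1e16] * 1000
    print(sum(vals))     # 0.0 -- the 1.0s are lost to rounding
    print(dd_sum(vals))  # 1000.0 -- recovered by tracking the error term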

That's why many math packages (e.g. Mathematica) provide arbitrary-precision arithmetic.
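As a rough stand-in (Python's standard decimal module rather than Mathematica itself), the working precision becomes a setting instead of a property of the hardware format:

    from decimal import Decimal, getcontext

    getcontext().prec = 50              # ask for 50 significant digits
    print(Decimal(1) / Decimal(3))      # 0.333... to 50 digits

    getcontext().prec = 100             # now ask for 100
    print(Decimal(2).sqrt())            # sqrt(2) to 100 significant digits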


Arbitrary precision != decimal. So the question still stands, why would decimal matter?
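One way to make the distinction concrete, sketched with Python's standard library: extra binary precision never makes 0.1 exact, because 1/10 has a factor of 5 in its denominator and therefore no terminating binary expansion.

    from fractions import Fraction
    from decimal import Decimal

    # The value actually stored when you write the double literal 0.1:
    print(Fraction(0.1))        # 3602879701896397/36028797018963968
    print(Fraction(1, 10))      # 1/10 -- the value we meant

    # More binary bits only shrink the gap; a decimal format closes it
    # with a single digit:
    print(Decimal('0.1'))       # 0.1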


When the question is stated like that, the answer is: it is simply more convenient for us humans, and computers would "lie" to us less often in a significant number of cases. We don't care that much about accumulated rounding errors during a computation; we just don't expect our inputs to be interpreted "wrong" the moment we enter them.

Think about it this way: you have a computer capable of billions of operations per second, with unfathomable capacity, and yet it lies to you as soon as you enter the number 16.1 in almost any program: it stores some other number, dropping an infinite tail of binary digits! Why, you ask? The answer is "because otherwise it's not in the format native to the hardware."
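A minimal Python illustration of that point:

    from decimal import Decimal

    # Converting the binary double to Decimal exposes what was really stored:
    print(Decimal(16.1))     # 16.100000000000001421085... -- not 16.1
    print(Decimal('16.1'))   # 16.1 -- a decimal type keeps what was typed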

So it should be native to the hardware. Not because of "scientific computing," but for real-life, everybody's everyday computing. We need it for computers to "just work."


Thanks for the response. So decimal-in-silicon doesn't matter for scientific computing after all. :)


Yes, I was the one who questioned the "scientific" motive; see the top of the thread! Still, on human grounds, I claim we really, really need it in hardware. It doesn't matter for "scientific computing"; it matters for us humans, as long as the decimal system is the only one we really "understand."

Any program that does a lot of computation has to use hardware-based arithmetic to be really fast, yet nobody expects the 0.10 they type to lose its meaning as soon as it is entered. It is an absurd situation.


To be clear... I wasn't arguing for decimal. I was simply saying that "just use doubles" wasn't a valid solution for many scientific problems.


I'm sorry, are you asking for what purpose a scientist would need to multiply by a power of ten? Converting between units in scientific notation? Doing base 10 logarithms? Calculating things in decibels?

Maybe I have misunderstood the question.


The original post that started this thread was saying that chips should support decimal floating point natively in silicon instead of only base-2 floating point. Yes, those are different things: https://en.wikipedia.org/wiki/IEEE_754#Basic_formats


I am well aware of the difference between base 2 and base 10.

You asked "why not just use doubles?" And my answer is "because one often multiplies by powers of ten."
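To make that concrete, a small Python sketch of what power-of-ten scaling does to binary doubles, compared with a decimal type:

    from decimal import Decimal

    # Scaling by a power of ten (unit conversions, dB, scientific notation)
    # is exact in decimal but generally not in binary floating point:
    print(4.35 * 100)               # 434.99999999999994
    print(Decimal('4.35') * 100)    # 435.00

    print(0.07 * 100)               # 7.000000000000001
    print(Decimal('0.07') * 100)    # 7.00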



