Right, and I can see why that's a problem in accounting, but why does it matter for scientific computing? I do a fair amount of stuff that could be called scientific computing, and I just use doubles. If I need to keep track of uncertainty or propagate errors, I normally use Gaussians as the representation (not "significant figures" as would be implied by using decimal).
It almost never matters in scientific computing. Doubles give us the equivalent of almost 16 decimal digits of accuracy, which is more precision than we know any physical constant to. You're right that the world isn't decimal, and switching to a decimal encoding of the same width actually reduces the effective precision of a computation.
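For anyone who wants to check that figure, here's a quick Python sketch using only the standard library (assuming IEEE 754 binary64 doubles):

    # Rough check of the "almost 16 digits" claim for binary64 doubles.
    import sys, math

    print(sys.float_info.mant_dig)                  # 53 bits of significand
    print(sys.float_info.mant_dig * math.log10(2))  # ~15.95 equivalent decimal digits
    print(sys.float_info.dig)                       # 15 digits guaranteed to round-trip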
There's a reason they're called the natural numbers. Nature doesn't have to be decimal for decimals to be useful (the question that started this debate); it just has to be rational. Many, many parts of nature are rational, and sometimes we need to deal with them in scientific computing. DNA sequence processing comes to mind.
When the question is stated like that, the answer is: it's simply more convenient for us humans, and computers would "lie" to us less often in a significant number of cases. We don't care that much about accumulated rounding errors in a computation; we just don't expect our inputs to be interpreted "wrong" the instant we enter them.
Think about it this way: you have a computer capable of billions of operations per second, with unfathomable capacity, and yet it lies to you as soon as you enter the number 16.1 in almost any program: it stores some other number, silently discarding an infinite tail of binary digits! Why, you ask? Because "otherwise it's not in the format native to the hardware."
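To make that concrete, here's a tiny Python illustration; the decimal module can display the exact value of the double that the literal 16.1 turns into:

    # The literal 16.1 is rounded to the nearest binary64 double at parse time.
    # Decimal(float) converts that double exactly, exposing the stored value.
    from decimal import Decimal

    print(16.1)           # 16.1 -- repr just picks the shortest string that round-trips
    print(Decimal(16.1))  # 16.10000000000000142108547152020... -- the number actually stored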
So it should be native to the hardware. Not because of "scientific computing," but for real-life, everybody's everyday computing. We need it for computers to "just work."
Yes, I was the one who questioned the "scientific" motive; see the top of the thread! Still, on human grounds, I claim we really, really need it in hardware. It doesn't matter for "scientific computing"; it matters for us humans, as long as the decimal system is the only one we really "understand."
Any program that does a lot of computation has to use hardware arithmetic to be really fast, yet nobody expects the 0.10 cents they write to lose its meaning the moment it is entered. It is an absurd situation.
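Today the exactness has to be recovered in software, e.g. with Python's decimal module, which is far slower than the hardware float path:

    # Hardware binary floats: 0.10 is already inexact before any arithmetic happens.
    print(0.10 + 0.20)                        # 0.30000000000000004

    # Software decimal arithmetic keeps exactly what the user typed.
    from decimal import Decimal
    print(Decimal("0.10") + Decimal("0.20"))  # 0.30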
I'm sorry, are you asking for what purpose a scientist would need to multiply by a power of ten? Converting between units in scientific notation? Doing base 10 logarithms? Calculating things in decibels?
The original post that started this thread was saying that chips should support decimal floating point natively in silicon instead of only base-2 floating point. Yes, those are different things: https://en.wikipedia.org/wiki/IEEE754#Basic_formats
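Those decimal formats can already be emulated in software; here's a rough decimal64-like setup using Python's decimal module (16 digits, exponent range about -383..384, per the decimal64 parameters -- what the original post wanted is this done in silicon):

    # A decimal64-like context, emulated in software rather than hardware.
    from decimal import Context, Decimal, ROUND_HALF_EVEN, setcontext

    setcontext(Context(prec=16, Emin=-383, Emax=384, rounding=ROUND_HALF_EVEN))
    print(Decimal("16.1") * 3)   # 48.3, with no binary rounding anywhere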