When the question is stated like this, the answer is: it is simply more convenient for us humans, and computers would "lie" to us in far fewer cases. We don't care that much about rounding errors accumulating deep inside a computation; what we don't expect is for our inputs to be interpreted "wrong" the moment we enter them.
Think about it this way: you have a computer capable of billions of operations per second, with unfathomable storage, yet it lies to you as soon as you enter the number 16.1 in almost any program: it stores some other number, because 16.1 has an infinite binary expansion that gets cut off. Why, you ask? Because "otherwise it's not in the format native to the hardware."
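To make that concrete, here is a small Python sketch (assuming ordinary IEEE-754 doubles, which is what the hardware gives you); Decimal(16.1) exposes the exact value that was actually stored:

    from decimal import Decimal

    # Decimal(float) converts exactly, so it reveals the real stored value
    # behind the literal 16.1 on a typical IEEE-754 double platform.
    print(Decimal(16.1))            # 16.100000000000001421... -- not 16.1
    print(16.1 == Decimal("16.1"))  # False: the stored double differs from what was typed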
So decimal should be native to the hardware. Not because of "scientific computing," but for real-life, everybody's everyday computing. We need it for computers to "just work."
Yes, I was the one who questioned the "scientific" motive, see the top of the thread! Still, on human grounds, I claim we really, really need it in hardware. It doesn't matter for "scientific computing"; it matters for us humans, as long as the decimal system is the only one we really "understand."
Any program that does a lot of computation has to use hardware-based arithmetic to be really fast, yet nobody expects the 0.10 they write to lose its meaning as soon as it is entered. It is an absurd situation.
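For example (a sketch in Python, nothing more): adding ten of those 0.10 amounts with hardware binary floats already misses, while a software decimal type keeps exactly what was typed:

    from decimal import Decimal

    # Ten dimes should be exactly one dollar.
    print(sum([0.10] * 10))                   # 0.9999999999999999 with binary doubles
    print(sum([0.10] * 10) == 1.0)            # False

    # The same sum done in decimal keeps the typed value.
    print(sum([Decimal("0.10")] * 10))        # 1.00
    print(sum([Decimal("0.10")] * 10) == 1)   # True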
That's why many math packages (e.g. Mathematica) provide arbitrary-precision arithmetic.
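For illustration, Python's decimal module does the same kind of thing in software (an analogous sketch, not Mathematica's own API): you choose how many significant digits to carry, and the arithmetic is done in decimal:

    from decimal import Decimal, getcontext

    getcontext().prec = 50                    # carry 50 significant decimal digits
    print(Decimal(1) / Decimal(7))            # 1/7 to 50 significant digits
    print(Decimal("16.1") * Decimal("16.1"))  # 259.21 -- exact, since the operands were stored as typed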