I'd certainly like to have decimal FPU types too. I know IBM did some nice work there, but the rest of the industry mostly ignored it, and I think that's a pity.
Still, I don't see where in scientific computing anybody would need it, given the nature of the problems being solved -- what about them is decimal in nature? When you calculate with base-2 FP, you get better "resolution" in your intermediate results in the "absolute" sense (not in the "let me see the decimal digits" sense, of course). For the same reason, when you make a long series of calculations, the error accumulates more slowly with binary. That's why base-2 FP has been used all these years: when you don't need to calculate money amounts, it is simply better.
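If you'd rather see that accumulation effect than take my word for it, here is a rough Python sketch of my own (Python's decimal module standing in for a 16-digit decimal format; the term count and the 50-digit reference precision are arbitrary choices, not anything from a real decimal FPU):

    import random
    from decimal import Decimal, localcontext

    # 100,000 random fractions with 20 decimal digits; neither binary64 nor a
    # 16-digit decimal format can represent them exactly.
    terms = [Decimal(random.randint(1, 10**20)).scaleb(-20) for _ in range(100_000)]

    with localcontext() as ctx:                  # effectively exact reference sum
        ctx.prec = 50
        reference = sum(terms, Decimal(0))

    with localcontext() as ctx:                  # 16-digit decimal arithmetic
        ctx.prec = 16
        dec_sum = sum(terms, Decimal(0))

    bin_sum = sum(float(t) for t in terms)       # binary64 arithmetic

    print("16-digit decimal rel. error:", abs(dec_sum - reference) / reference)
    print("binary64         rel. error:", abs(Decimal(bin_sum) - reference) / reference)
    # On most runs binary64 shows several times less accumulated error, in line
    # with its smaller unit roundoff (2**-53 ~ 1.1e-16 versus 0.5 * 10**-15),
    # even though both formats carry a comparable number of decimal digits.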
But what are the examples where decimal is more "natural" for scientific computing?
To amplify this a bit: within one exponent range the spacing of representable numbers is fixed, so for a given absolute error the relative error "wobbles" as the value grows. In binary that wobble is at most a factor of two, but in decimal it can be up to a factor of 10. IBM actually used base 16 on some hardware, and there the factor-of-16 disparity between relative and absolute error hurt enough that it was eventually abandoned.
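To put rough numbers on that wobble, here is a small Python sketch of my own; the 7-digit decimal precision and the particular test values are just illustrative picks:

    import math
    from decimal import Decimal, getcontext

    # Decimal with 7 significant digits: worst-case relative rounding error just
    # above 1 (bottom of the decade) versus just below 10 (top of the decade).
    getcontext().prec = 7
    x_lo = Decimal("1.0000004999")   # just under the midpoint of 1.000000 and 1.000001
    x_hi = Decimal("9.9999984999")   # just under the midpoint of 9.999998 and 9.999999
    err_lo = abs(+x_lo - x_lo) / x_lo    # ~5e-7 (unary + rounds to 7 digits)
    err_hi = abs(+x_hi - x_hi) / x_hi    # ~5e-8
    print(err_lo / err_hi)               # ~10: relative error wobbles by the base, 10

    # Binary64: relative spacing at the bottom versus the top of the binade [1, 2).
    rel_lo = math.ulp(1.0) / 1.0                        # 2**-52
    rel_hi = math.ulp(math.nextafter(2.0, 1.0)) / 2.0   # 2**-53
    print(rel_lo / rel_hi)                              # ~2: binary wobbles by only 2

The same exercise with base 16 gives a wobble of 16, which is the disparity that hurt the old hex formats.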