I think you are significantly underestimating the prevalence of floating-point calculations; there is a reason Intel and AMD created all those special SIMD instructions. Multimedia is a big user, for example. You are also seriously underestimating the performance cost of using decimal types; we are talking orders of magnitude.
There are still a lot of people doing a lot of work in which they hardly ever want a floating-point number, but they end up using one because it's the "obvious" type you get when you just write `4.2`, and BigDecimal is cumbersome to use.
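A quick Java sketch of that contrast, since BigDecimal is the decimal type mentioned (the printed values assume standard IEEE 754 double behavior):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalVsDouble {
    public static void main(String[] args) {
        // The "obvious" literal gives you a binary double, with the usual surprise:
        double d = 4.2;
        System.out.println(d * 3);                  // 12.600000000000001

        // Exact decimal arithmetic means spelling everything out:
        BigDecimal bd = new BigDecimal("4.2");      // construct from a String to keep the exact value
        BigDecimal result = bd.multiply(BigDecimal.valueOf(3));
        System.out.println(result);                 // 12.6

        // Even simple division needs an explicit scale and rounding mode,
        // or it throws on a non-terminating decimal expansion:
        BigDecimal third = BigDecimal.ONE.divide(
                BigDecimal.valueOf(3), 10, RoundingMode.HALF_UP);
        System.out.println(third);                  // 0.3333333333
    }
}
```

The double version is a single hardware-supported type with literal syntax; the BigDecimal version allocates objects, does its arithmetic in software, and forces you to think about construction and rounding at every step, which is both the performance gap and the cumbersomeness being described.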