Floating-point numbers have a fixed number of digits of accuracy, determined by the format. (Using base 10 for simplicity) Let's say the significand runs from .100 to .999, times 10^x.
But what happens when you compute .123 x 10^3 - .100 x 10^3? The result is .23? x 10^2, but what goes in that last digit? We might prefer to pick 0, but it really could be anything. We can't even be sure about the 3: if the numbers were .1226 x 10^3 and .1004 x 10^3 before being rounded into the format, the correct answer would be .222 x 10^2.
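Here's a minimal sketch of that effect, using Python's decimal module with its precision set to three significant digits to stand in for the hypothetical base-10 format (the variable names and the unary-plus trick to force rounding are just my own illustration):

    from decimal import Decimal, getcontext

    getcontext().prec = 3  # three significant digits, like .100 to .999 x 10^x

    # The "true" values, before they were squeezed into the format:
    true_a = Decimal("122.6")  # .1226 x 10^3
    true_b = Decimal("100.4")  # .1004 x 10^3

    # Unary plus forces rounding to the context precision, i.e. what actually gets stored:
    a = +true_a  # .123 x 10^3
    b = +true_b  # .100 x 10^3

    print(a - b)            # 23   (.230 x 10^2, computed from the stored values)
    print(true_a - true_b)  # 22.2 (.222 x 10^2, the answer we "should" have gotten)

The stored values give .230 x 10^2 while the unrounded values would have given .222 x 10^2, so even the second digit of the computed result is already off.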
You could see it as a "limitation of the format", or you could see it as exchanging one type of mathematical object for another.
For example, CPU integers aren't like mathematical integers. CPU integers wrap around. So CPU integers aren't "really" the integers; CPU integers are actually the ring of integers modulo 2^n, with their names changed!
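As a quick illustration (my own sketch, emulating a 32-bit register in Python by reducing mod 2^32 after each operation, which is what the hardware does implicitly):

    MASK = (1 << 32) - 1  # 0xFFFFFFFF

    def add32(a, b):
        # Addition in the ring of integers modulo 2^32, i.e. what a 32-bit ADD does.
        return (a + b) & MASK

    print(add32(0xFFFFFFFF, 1))  # 0, not 4294967296 -- the sum wrapped around
    print(add32(3, 4))           # 7, agrees with ordinary integers when nothing wraps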
I'm not sure what the ring(?) that contains all the IEEE754 floating-point numbers and their relations is called, but it certainly exists.
And, rather than thinking of yourself as imprecisely computing on the reals, you can think of what you're doing as exact computation on members of the IEEE754 field-object—a field-object where 9999999999999999.0 - 9999999999999998.0 being anything other than 2.0 would be incorrect. Even though the answer, in the reals, is 1.0.
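You can check that one directly in Python, whose floats are IEEE 754 doubles:

    x = 9999999999999999.0  # odd, so not representable at this magnitude; it rounds to 1e16
    y = 9999999999999998.0  # even, exactly representable
    print(x == 1e16)  # True
    print(x - y)      # 2.0 -- exactly correct for the two doubles actually stored,
                      # even though the real-number answer for the written literals is 1.0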