
It's also a range-storage trade-off. If you use two fixed-width integers to represent a rational, the minimum and maximum values are the same as those of the integer type. Floating point gives a far wider range for the same number of bits.
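A minimal C sketch of that range gap, assuming the naive two-int32_t layout (the layout is an assumption for illustration; the limits come straight from <stdint.h> and <float.h>):

    #include <float.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* largest magnitude a two-int32_t rational can reach: the numerator's max */
        printf("max int32_t: %ld\n", (long)INT32_MAX);  /* ~2.1e9 */
        /* a double also occupies 64 bits but reaches vastly further */
        printf("max double:  %e\n", DBL_MAX);           /* ~1.8e308 */
        return 0;
    }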



I'm sure there's some subtlety I'm missing, but isn't it actually the same trade-off? A 64-bit float can only represent integers up to 2^53 exactly. Anything above that, and you don't even have integer-level precision on the number anymore... This sliding scale of precision is exactly why floats are terrible at the kinds of operations that would cause you to use a rational instead.
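A quick way to see that exactness cliff at 2^53 (a hypothetical check, not from the thread): the nearest double to 2^53 + 1 is 2^53 itself, so the two become indistinguishable after conversion.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint64_t n = (uint64_t)1 << 53;      /* 2^53 */
        double a = (double)n;
        double b = (double)(n + 1);          /* rounds back down to 2^53 */
        printf("2^53 and 2^53+1 distinct as doubles? %s\n",
               a == b ? "no" : "yes");       /* prints "no" */
        return 0;
    }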


> I'm sure there's some subtlety I'm missing, but isn't it actually the same trade-off?

Not exactly, unless you consider space efficiency to be an aspect of performance (which is certainly reasonable). A naive implementation of rationals using two int32_t values only covers the range of a single int32_t, despite using as many bits as a double. It's also a trade-off between range and consistent precision, of course.
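A sketch of the naive layout being described; the struct and field names here are illustrative assumptions:

    #include <stdint.h>

    /* 64 bits total, same as a double, but the reachable magnitude
       is still capped by the int32_t numerator */
    typedef struct {
        int32_t num;   /* numerator: sets the overall range */
        int32_t den;   /* denominator: buys fine precision, not range */
    } rational32;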

This certainly isn't some deep insight into number representation, just a quick point for the benefit of people who haven't thought much about rational data types before.


Once you care about that level of performance, you can surely optimise your representation to have a greater range (use more bits for the numerator) or greater precision (more bits for the denominator) or some boutique solution like using three integers to store the number a + b/c.
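One way such a three-integer "boutique" layout might look; the field widths are illustrative assumptions, not a concrete proposal from the comment:

    #include <stdint.h>

    /* value = whole + num/den, keeping 0 <= num < den as an invariant;
       a wide whole part extends range while the fraction keeps precision */
    typedef struct {
        int64_t whole;  /* integer part: carries the range */
        int32_t num;    /* proper-fraction numerator */
        int32_t den;    /* denominator, den > 0 */
    } mixed_rational;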

You can store slightly fewer numbers with rationals, because it's hard to avoid having a representation for both 2/4 and 3/6 (both encodings of 1/2). But the loss of range or precision due to that is pretty small.
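A small demonstration of that redundancy, assuming int32_t components: reducing by gcd shows 2/4 and 3/6 collapse to the same canonical value, so distinct bit patterns are spent encoding one number.

    #include <stdint.h>
    #include <stdio.h>

    /* gcd via Euclid's algorithm, for positive inputs */
    static int32_t gcd32(int32_t a, int32_t b) {
        while (b != 0) { int32_t t = a % b; a = b; b = t; }
        return a;
    }

    int main(void) {
        int32_t pairs[][2] = { {2, 4}, {3, 6}, {1, 2} };
        for (int i = 0; i < 3; i++) {
            int32_t g = gcd32(pairs[i][0], pairs[i][1]);
            /* all three print "-> 1/2": three bit patterns, one value */
            printf("%ld/%ld -> %ld/%ld\n",
                   (long)pairs[i][0], (long)pairs[i][1],
                   (long)(pairs[i][0] / g), (long)(pairs[i][1] / g));
        }
        return 0;
    }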



