> Correct, there are not 255 possibilities; it’s closer to 50
1 in 10 values has two representations, 1 in 100 has three, and so on, since you can only change the exponent when the significand is divisible by 10.
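To make the counting concrete, here's a quick brute-force sketch over a toy format (3-digit significand, small exponent range; both parameters are made up for illustration, not any real spec):

```python
from collections import Counter
from fractions import Fraction

SIG_MAX = 999            # toy 3-digit significand
EXPONENTS = range(-4, 5) # toy exponent range

# Count how many distinct (significand, exponent) encodings
# each representable value has.
encodings = Counter()
for sig in range(1, SIG_MAX + 1):
    for exp in EXPONENTS:
        encodings[Fraction(sig) * Fraction(10) ** exp] += 1

for n_reps, n_values in sorted(Counter(encodings.values()).items()):
    print(f"{n_values} values have {n_reps} representation(s)")
# Roughly 1 in 10 values gets a second encoding (significand
# divisible by 10), 1 in 100 a third, etc., modulo edge effects
# at the ends of the exponent range.
```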
> For hardware it is probably cheaper to unconditionally normalize.
Given that normalization is going to be infrequent (most values will either just be integers or have a single representation), and that arithmetic is more common than comparisons, I don't see why this would be. Normalizing a binary value is much easier (a trailing-zero shift rather than repeated division by 10), so you can't carry floating-point intuitions over.
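Roughly what I mean, as a sketch (assuming "normalized" means no trailing zeros in the significand; the function names are just for illustration):

```python
def normalize_decimal(sig: int, exp: int) -> tuple[int, int]:
    # Each trailing zero digit costs a divide-and-remainder by 10,
    # which hardware has no cheap shift for -- and the loop body
    # only runs for the ~1-in-10 significands that end in zero.
    while sig != 0 and sig % 10 == 0:
        sig //= 10
        exp += 1
    return sig, exp

def normalize_binary(sig: int, exp: int) -> tuple[int, int]:
    # Trailing zero *bits* come off with shifts; real hardware does
    # this in one step with a count-trailing-zeros circuit.
    while sig != 0 and sig & 1 == 0:
        sig >>= 1
        exp += 1
    return sig, exp

print(normalize_decimal(1200, 0))   # (12, 2)
print(normalize_binary(0b1100, 0))  # (3, 2)
```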
> All math operations will also need to normalize inputs first because otherwise you throw away precision for no reason.
No, you just conform the exponents and handle overflow.
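Something like this sketch of decimal addition, assuming a toy fixed-width format (the 7-digit width and round-half-away-from-zero choice are mine for illustration, not from any particular spec):

```python
SIG_DIGITS = 7                 # toy significand width
SIG_LIMIT = 10 ** SIG_DIGITS

def add(a_sig, a_exp, b_sig, b_exp):
    # Conform exponents: scale the larger-exponent operand's
    # significand up so both share the smaller exponent. Nothing
    # is rounded or normalized here, so no precision is thrown away.
    if a_exp < b_exp:
        a_sig, a_exp, b_sig, b_exp = b_sig, b_exp, a_sig, a_exp
    sig = a_sig * 10 ** (a_exp - b_exp) + b_sig
    exp = b_exp

    # Handle overflow: only if the sum needs more digits than the
    # format holds do we round, once, by the exact excess (round
    # half away from zero; a real design would pick its mode).
    excess = 0
    while abs(sig) >= SIG_LIMIT * 10 ** excess:
        excess += 1
    if excess:
        sign = -1 if sig < 0 else 1
        sig = sign * ((abs(sig) + 5 * 10 ** (excess - 1)) // 10 ** excess)
        exp += excess
        if abs(sig) == SIG_LIMIT:  # rounding carried into a new digit
            sig //= 10             # exact, no second rounding
            exp += 1
    return sig, exp

print(add(9999999, 0, 1, -2))  # -> (9999999, 0): conformed, then rounded once
```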