That thread is rather depressing, being mostly kneejerk reactions and insults. From what I can tell, the actual criticisms are just:
1. "rounding modes and overflow behavior are not addressed", which is incorrect.
2. "Where's exp(), log(), sin(), cos()?", which you can find in dec64_math.c.
3. "There are also 255 representations of almost all representable numbers", which is incorrect.
4. "it will take around FIFTY INSTRUCTIONS TO CHECK IF TWO NUMBERS ARE EQUAL OR NOT" in the slow case, which is probably true but is unlikely to be common given the design of the type. It can undoubtedly be improved with effort.
5. "The exponent bits should be the higher order bits. Otherwise, this type breaks compatibility with existing comparator circuitry", which seems like an odd comment given the type is not normalized.
6. "With both fields being 2's compliment, you're wasting a bit, just to indicate sign", which AFAICT is just false.
7. "there's no fraction support", which I don't understand.
So the only valid criticism I saw in that whole thread is #4, and even that is only a tenth as valid as its author thought it was.
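For what it's worth, here is roughly how the two fields unpack, which is the context for #5 and #6. This is a sketch based on my reading of the DEC64 description (56-bit two's-complement coefficient in the high bits, 8-bit two's-complement exponent in the low byte); the helper names are mine, not from the repo, and it assumes the compiler does an arithmetic right shift on signed values:

    #include <stdint.h>
    #include <stdio.h>

    typedef int64_t dec64;

    /* coefficient: high 56 bits, signed two's complement */
    static int64_t coefficient(dec64 x) {
        return x >> 8;                /* arithmetic shift preserves the sign */
    }

    /* exponent: low 8 bits, signed two's complement */
    static int exponent(dec64 x) {
        return (int8_t)(x & 0xFF);
    }

    static dec64 make(int64_t coeff, int exp) {
        return (dec64)(((uint64_t)coeff << 8) | (uint8_t)exp);
    }

    int main(void) {
        dec64 x = make(-123, -2);     /* -123 * 10^-2, i.e. -1.23 */
        printf("coefficient=%lld exponent=%d\n",
               (long long)coefficient(x), exponent(x));
        return 0;
    }

Since both fields are ordinary two's-complement integers there is no dedicated sign bit to "waste" (re #6), and the exponent living in the low byte (re #5) only matters if you wanted raw integer comparisons to order values, which the lack of normalization rules out anyway.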
Re 1: it is true that there is a single paragraph "discussing" rounding in https://github.com/douglascrockford/DEC64/blob/master/dec64.... -- which is not much. If there is more, I did not find it, and would genuinely appreciate a pointer! In the meantime, though, alternative rounding methods are missing, and those can be quite important for both scientific and business computations. I guess they could be added, but a lot of things "could be done"; the fact is, they have not been done so far. As it stands, DEC64 is mostly a proposal for people to sit down and work out an actual standard. Perhaps this will happen one day, but so far not many people seem convinced it's worth investing the effort (I am also not sure the author is interested in feedback and collaboration; I see no indication of that anywhere).
Re 2: these are at best toy implementations, at worst dangerous, in the sense that they may provide wildly inaccurate results due to convergence issues.
Re 3: agreed. Though there is still a gargantuan number of values which have 2 or more representations, and that makes all kinds of comparisons more complicated (a small illustration follows below). Dealing with that efficiently in SW is difficult, and more so in HW. It might be worth it if the advantages outweighed the cost, but I personally don't see that they do.
Re 4: if it can be "undoubtedly improved with effort", why hasn't it been done in several years? Sure, it may be possible, but I will keep my doubts for the time being :-)
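To make the Re 3 point concrete: because the value is just coefficient * 10^exponent and no canonical form is required, any coefficient divisible by 10 can trade a factor of 10 against the exponent, so the same number maps to several bit patterns. A rough illustration, using the same assumed layout as above, with the packing done by hand:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        /* four distinct encodings of one hundred */
        int64_t coeffs[] = { 100, 10, 1, 1000 };
        int     exps[]   = {   0,  1, 2,   -1 };
        for (int i = 0; i < 4; i++) {
            uint64_t bits = ((uint64_t)coeffs[i] << 8) | (uint8_t)exps[i];
            printf("%4lld * 10^%-2d -> 0x%016llx\n",
                   (long long)coeffs[i], exps[i], (unsigned long long)bits);
        }
        return 0;
    }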
> In the meantime, though, alternative rounding methods are missing, and those can be quite important for both scientific and business computations.
My inexpert understanding is that modifying rounding modes is super niche and poorly supported by most things, so this doesn't strike me as much of a problem. A saner replacement for rounding mode flags would just be to offer different operations for the rare cases where they are wanted.
> Dealing with that efficiently in SW is difficult, and more so in HW.
Not really; you never actually need to normalize values, and not doing so makes basically everything other than comparisons cheaper. I don't see how normalizing around every arithmetic operation would make the hardware any simpler.
> if it can be "undoubtedly improved with effort", why hasn't it been done in several years?
Because this is one guy's project and it hasn't seen much (any?) use.
> So the only valid criticism I saw in that whole thread is #4, and even that is only a tenth as valid as its author thought it was.
Not really here to defend these criticisms, but if #4 is only a tenth as valid as the author thought it was, then does that mean that you know the worst case instruction count to compare two DEC64 numbers is five? Also, we're talking x86 here, how many μops and static instruction bytes are we talking on either side of this? Are all of the branches usually predictable? Can we ever afford to inline something as critical as the comparison operation?
As for the validity of #5, I think this might make more sense in the context of hardware implementations, where a low-order exponent position could increase complexity (requiring adjustments to reuse any of the FPU datapath) AFAIK.
All I meant is that the author of the comment said it would apply in most cases, whereas even pessimistically it seems it should apply about one time in ten. I don't have detailed performance information.
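For anyone wondering what the fast/slow split looks like, here is my guess at the shape of an equality check. It is not the repo's actual code (that is hand-written assembly) and it skips nan handling, but it shows why the slow path is rare: it only triggers when the exponents differ.

    #include <stdint.h>
    #include <stdbool.h>

    typedef int64_t dec64;

    /* strip trailing zeros from the coefficient so equal values end up
       with identical (coefficient, exponent) pairs */
    static void canonical(dec64 x, int64_t *c, int *e) {
        *c = x >> 8;                  /* assumes arithmetic right shift */
        *e = (int8_t)(x & 0xFF);
        if (*c == 0) { *e = 0; return; }
        while (*c % 10 == 0) { *c /= 10; (*e)++; }
    }

    bool dec64_equal_sketch(dec64 a, dec64 b) {
        if (a == b) return true;                    /* identical bits */
        if ((uint8_t)a == (uint8_t)b) return false; /* same exponent, different coefficient */
        /* slow path: at most ~16 divisions per operand */
        int64_t ca, cb; int ea, eb;
        canonical(a, &ca, &ea);
        canonical(b, &cb, &eb);
        return ca == cb && ea == eb;
    }

How that translates into μops and branch behaviour on x86 I don't know, which is really what the instruction-count argument hinges on.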
3. Correct, there are not 255 possibilities; it’s closer to 50 - I have no paper to work it out on right now. This is a consequence of the decision to “not require normalization”, which is not a good idea.
4. To implement compare quickly you can probably short circuit by first checking that the values are normalized, and normalizing them if not. From a hardware point of view you just added two registers for compare, in addition to the logic for normalizing. For hardware it is probably cheaper to unconditionally normalize.
All math operations will also need to normalize inputs first, because otherwise you throw away precision for no reason. For similar reasons all outputs will need to be normalized anyway. Again, given this, why would you choose not to just have an implied leading bit?
5. Uh yeah what? If you’re making dedicated hardware you could randomize the bit ordering; that’s what making hardware is all about ;)
> Correct there are not 255 possibilities, it’s closer to 50
Roughly 1 in 10 values have two representations, 1 in 100 have three, etc., since you can only change the exponent when the coefficient is 0 mod 10.
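If you want to eyeball that density claim, a brute-force count over coefficients matches it: a bit pattern is redundant essentially when its coefficient is divisible by 10, since the same value could then be written with a smaller coefficient and a bigger exponent. A quick sketch:

    #include <stdio.h>

    int main(void) {
        long n = 1000000, div10 = 0, div100 = 0;
        for (long c = 1; c <= n; c++) {
            if (c % 10 == 0)  div10++;   /* value also has a smaller-coefficient encoding */
            if (c % 100 == 0) div100++;  /* ...and yet another */
        }
        printf("divisible by 10:  %ld of %ld (%.1f%%)\n", div10, n, 100.0 * div10 / n);
        printf("divisible by 100: %ld of %ld (%.2f%%)\n", div100, n, 100.0 * div100 / n);
        return 0;
    }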
> For hardware it is probably cheaper to unconditionally normalize.
Given that normalization is going to be infrequent (most values will either just be integers or have a single representation), and that arithmetic is more common than comparisons, I don't see why this would be. Normalizing a binary value is much easier, so you can't carry floating-point intuitions over.
> All math operations will also need to normalize inputs first because otherwise you throw away precision for no reason.
No, you just conform the exponents and handle overflow.
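Concretely, something like this is all "conform the exponents" means. It is a sketch of non-normalizing addition under the same assumed layout as above, with the overflow/rounding path only stubbed out rather than implemented (a real implementation would round the less significant operand instead of giving up):

    #include <stdint.h>

    typedef int64_t dec64;

    #define COEFF_MAX  ((int64_t)36028797018963967)   /* 2^55 - 1 */
    #define COEFF_MIN  (-COEFF_MAX - 1)

    static dec64 pack(int64_t coeff, int exp) {
        return (dec64)(((uint64_t)coeff << 8) | (uint8_t)exp);
    }

    dec64 add_sketch(dec64 a, dec64 b) {
        int64_t ca = a >> 8, cb = b >> 8;        /* assumes arithmetic shift */
        int ea = (int8_t)(a & 0xFF), eb = (int8_t)(b & 0xFF);
        /* align: scale the operand with the larger exponent down to the smaller one */
        while (ea > eb && ca <= COEFF_MAX / 10 && ca >= COEFF_MIN / 10) {
            ca *= 10; ea--;
        }
        while (eb > ea && cb <= COEFF_MAX / 10 && cb >= COEFF_MIN / 10) {
            cb *= 10; eb--;
        }
        if (ea != eb)
            return pack(0, -128);   /* punt with nan (exponent -128) instead of rounding */
        int64_t sum = ca + cb;
        if (sum > COEFF_MAX || sum < COEFF_MIN)
            return pack(0, -128);   /* coefficient overflow: same punt */
        return pack(sum, ea);       /* no normalization of the result needed */
    }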
1. "rounding modes and overflow behavior are not addressed", which is incorrect.
2. "Where's exp(), log(), sin(), cos()?", which you can find in dec64_math.c.
3. "There are also 255 representations of almost all representable numbers", which is incorrect.
4. "it will take around FIFTY INSTRUCTIONS TO CHECK IF TWO NUMBERS ARE EQUAL OR NOT" in the slow case, which is probably true but is unlikely to be common given the design of the type. It can undoubtedly be improved with effort.
5. "The exponent bits should be the higher order bits. Otherwise, this type breaks compatibility with existing comparator circuitry", which seems like an odd comment given the type is not normalized.
6. "With both fields being 2's compliment, you're wasting a bit, just to indicate sign", which AFAICT is just false.
7. "there's no fraction support", which I don't understand.
So the only valid criticism I saw in that whole thread is #4, and even that is only a tenth as valid as its author thought it was.