What boggles my mind is that people there are suggesting using a float to represent an arbitrary-precision number.
Not only will that fail to solve their problem indefinitely (it's no better than an unsigned integer of the same size), but when it does fail due to floating-point inaccuracy, it will fail in subtle ways.
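To illustrate the subtlety, here's a minimal sketch in Python (whose float is an IEEE-754 double): a double has a 53-bit significand, so integers above 2**53 silently lose precision rather than raising any error.

```python
# A double can represent every integer up to 2**53 exactly.
big = 2 ** 53

print(float(big) == big)          # exact: True
print(float(big + 1) == big)      # the +1 is silently rounded away: True
print(float(big + 1) == big + 1)  # so this comparison is wrong: False
```

No overflow, no exception: the counter just stops distinguishing adjacent values, which is exactly the kind of subtle failure that's hard to notice until the damage is done.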