How often have you needed a 64-bit integer where a 56-bit one wouldn't do? I rather like the idea of having only one numeric type.



As a succinct example, 64 bits is about 584 years of nanoseconds. 56 bits is only about 2.3 years of nanoseconds.
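Back-of-the-envelope check, assuming a 365.25-day year (a quick sketch, not tied to any particular time API):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* How many years an N-bit unsigned count of nanoseconds covers. */
        const double ns_per_year = 1e9 * 31557600.0;   /* 365.25 days */
        printf("64-bit: %.1f years\n", pow(2, 64) / ns_per_year); /* ~584.5 */
        printf("56-bit: %.1f years\n", pow(2, 56) / ns_per_year); /* ~2.3   */
        return 0;
    }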

The problem is that many extant APIs return 64-bit integers, so if your language only has 56-bit integers, you are creating a bug/vulnerability every time you want to talk to the outside world.

e.g. sqlite has sqlite3_column_int64() to get a value from a result set. How do you use that safely if your language can only do 56-bit ints? Ugly.
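Roughly what the binding layer ends up doing (a sketch; column_int56 is a made-up helper, and the bounds assume a signed 56-bit host integer, like DEC64's coefficient):

    #include <sqlite3.h>
    #include <stdint.h>

    /* Range of a signed 56-bit integer: -2^55 .. 2^55 - 1. */
    #define INT56_MAX ((int64_t)0x7FFFFFFFFFFFFF)
    #define INT56_MIN (-INT56_MAX - 1)

    /* Read column iCol, refusing values a 56-bit int would silently mangle. */
    static int column_int56(sqlite3_stmt *stmt, int iCol, int64_t *out) {
        sqlite3_int64 v = sqlite3_column_int64(stmt, iCol);
        if (v > INT56_MAX || v < INT56_MIN)
            return -1;   /* caller falls back to a string/blob representation */
        *out = (int64_t)v;
        return 0;
    }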

Remember the "Twitter Apocalypse", when tweet IDs grew past what a JavaScript number can represent exactly (53 bits of integer precision) and all the JS programmers had to switch to using strings?

Also, bitboards. Bitboards are cool: https://chessprogramming.wikispaces.com/Bitboards.
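For the unfamiliar: a bitboard packs one bit per square of an 8x8 board into a single 64-bit word, so operations on a whole piece set become single AND/OR/shift instructions. A toy sketch:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t Bitboard;                 /* one bit per square, a1 = bit 0 */

    #define SQUARE(file, rank) ((Bitboard)1 << ((rank) * 8 + (file)))

    int main(void) {
        Bitboard white_pawns = 0x000000000000FF00ULL;  /* all eight pawns on rank 2 */
        Bitboard blockers    = SQUARE(4, 2);           /* a piece sitting on e3 */

        /* Single-push targets: shift the set up one rank, mask out blocked squares. */
        Bitboard pushes = (white_pawns << 8) & ~blockers;

        /* __builtin_popcountll is a GCC/Clang builtin. */
        printf("%d pawns can advance\n", __builtin_popcountll(pushes)); /* 7 */
        return 0;
    }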

EDIT: I also reject the premise that just because there's an ugly hack available, we can get rid of useful language features. Am I working for the language or is the language working for me?


The question is: why do APIs return 64-bit values? In general it's not because they need all 64 bits; it's because the 64-bit integer type is convenient for them. This might make Crockford's proposal completely impractical, but it doesn't invalidate the argument that led to it.

I reject the nanosecond example because it's completely arbitrary. 64 bits of picoseconds would only cover 0.584 years, so should we claim 64 bits isn't enough? Wouldn't 2000+ years of microseconds in 56 bits be good enough?
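(Those figures check out, using the same 365.25-day year as above:)

    2^64 ps / (1e12 ps/s * 31,557,600 s/yr) ≈ 0.58 years
    2^56 us / (1e6  us/s * 31,557,600 s/yr) ≈ 2,283 years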

I'll give you credit for bitboards though, that's one I hadn't considered.


Still, hardware is inherently base-2. What I'd like to see is hardware-assisted adaptive-precision predicates and compilers/runtimes that make proper use of modern instructions.

I have never ever thought that float/double/decimal was too much choice.



