
This might well be a dumb question, but why don't we have 128 bit processors?

No advantage? Crazy expense?




At some point, you need to ask what you mean by 128 bit. When people talk about an 8 bit, 16 bit, 32 bit, or 64 bit processor, they are generally conflating two or more things: the size of the general purpose registers, the size of the data bus (how much you can load from memory in a single transfer), the size of the address bus (how many lines you have for addressing RAM), and the size of pointers. In many machines these have all been the same, though not always; for example, 8 bit processors frequently had 14 or 16 bit addresses and busses so they could access up to 16K or 64K of memory, and the 68008 had 32 bit registers, a 20 bit address bus, and an 8 bit data bus.

So, when people talk about 32 or 64 bits, they generally mean two things: the size of general purpose registers, and the size of addresses.
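A quick way to see which of these sizes your own toolchain picked is to print a few sizeofs (a minimal C sketch; the exact numbers depend on the data model, e.g. LP64 on 64 bit Linux versus ILP32 on a 32 bit build):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* On a typical LP64 system these print 8, 8, 4, 8;
           on a 32 bit or x32 build, pointers shrink to 4 bytes. */
        printf("sizeof(void *)   = %zu\n", sizeof(void *));
        printf("sizeof(long)     = %zu\n", sizeof(long));
        printf("sizeof(int)      = %zu\n", sizeof(int));
        printf("sizeof(intptr_t) = %zu\n", sizeof(intptr_t));
        return 0;
    }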

There's basically no need for addresses beyond 64 bits, at least not for quite some time. With 64 bits, you can address 16,384 petabytes (16 exabytes) in a single process. Since the biggest single machines I can find these days support a maximum of 4 TB of RAM (and only if you filled them with 32 GB DIMMs that aren't yet available), we have a long way to go before anyone needs more than 64 bits of address space.
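To spell out the arithmetic: 2^64 bytes = 2^14 × 2^50 bytes = 16,384 PiB = 2^4 × 2^60 bytes = 16 EiB, so a 64 bit address space outstrips that 4 TB machine by a factor of 2^22, roughly four million.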

Furthermore, increasing address size can hurt performance. If your pointers are all 128 bits, they take up twice the space of 64 bit pointers. There have already been plenty of workloads that show a reduction in performance when ported to 64 bit machines, simply because the 64 bit pointers fill up so much valuable cache space. In fact, for this reason, Linux even supports the x32 ABI, which runs an x86-64 processor in 64 bit mode but uses only 32 bit pointers, so programs get the extra registers available in x86-64 without paying the price of larger pointers. https://en.wikipedia.org/wiki/X32_ABI
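To make the cache pressure concrete, here's a toy pointer-heavy node in C (the type is made up purely for illustration):

    #include <stdio.h>

    /* On an ILP32 or x32 build this struct is 12 bytes; on LP64 the
       two pointers double in size and alignment padding appears after
       the int, so it grows to 24 bytes and half as many nodes fit in
       each cache line. */
    struct node {
        int          value;
        struct node *next;
        struct node *prev;
    };

    int main(void) {
        printf("sizeof(struct node) = %zu\n", sizeof(struct node));
        return 0;
    }

With hypothetical 128 bit pointers the same node would roughly double again.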

So: 128 bit addresses would bring no benefit and plenty of potential downside, which means they're not going to happen for quite some time. How about for data, though?

Well, most software doesn't really need to work with integers or floating point numbers larger than 64 bits; for lots of applications, 64 or even 32 bits is sufficient. Public key crypto is the one common consumer of large integers, but it needs far bigger ones, like 2048 bits, so you have to do bignum arithmetic anyhow.
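For a flavor of what bignum code actually does, here's a sketch of schoolbook multi-limb addition (illustrative only; real libraries such as GMP use hand-tuned assembly built on the CPU's add-with-carry instructions):

    #include <stdint.h>
    #include <stddef.h>

    /* Add two n-limb numbers stored least-significant limb first;
       returns the final carry out. A 2048 bit number is n = 32
       limbs of 64 bits each. */
    uint64_t bignum_add(uint64_t *out, const uint64_t *a,
                        const uint64_t *b, size_t n) {
        uint64_t carry = 0;
        for (size_t i = 0; i < n; i++) {
            uint64_t sum = a[i] + carry;
            carry = (sum < carry);      /* overflow from a[i] + carry */
            sum += b[i];
            carry += (sum < b[i]);      /* overflow from sum + b[i] */
            out[i] = sum;
        }
        return carry;
    }

128 bit registers would halve the loop count, but you'd still be doing multi-word arithmetic either way.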

Lots of the gains that you get from working with larger types come from working on vectors of smaller types. But for those purposes, chips have had 128 bit registers for quite some time. SSE, introduced in 1999, included 128 bit vector registers, which could be treated as four 32 bit floats (AltiVec on PowerPC introduced the same idea around the same time; the idea of SIMD had been around in supercomputers for many years before that). Later extensions like SSE2 expanded their use, letting you treat the same registers as two 64 bit floats, two 64 bit integers, four 32 bit integers, eight 16 bit shorts, or sixteen 8 bit bytes.
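For example, with SSE2 intrinsics (a minimal sketch; SSE2 is baseline on any x86-64 compiler), a single instruction adds four 32 bit integers at once:

    #include <stdio.h>
    #include <emmintrin.h>   /* SSE2 intrinsics */

    int main(void) {
        /* One 128 bit XMM register viewed as four packed 32 bit
           integers; _mm_add_epi32 compiles to a single paddd. */
        __m128i a = _mm_set_epi32(4, 3, 2, 1);
        __m128i b = _mm_set_epi32(40, 30, 20, 10);
        __m128i c = _mm_add_epi32(a, b);

        int out[4];
        _mm_storeu_si128((__m128i *)out, c);
        printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);  /* 11 22 33 44 */
        return 0;
    }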

So, for the one use case where it's particularly valuable, working on vectors in aggregate, we've had 128 bit registers for quite some time. We've had 256 bit registers for a couple of years now in the form of AVX, and now AVX-512 promises to expand those to 512 bits. There's no good reason to expand your addresses in the same way; at that point, you're just wasting space.
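The 256 bit AVX version of the example above looks nearly identical (a sketch, assuming a CPU with AVX and a compiler flag like -mavx):

    #include <stdio.h>
    #include <immintrin.h>   /* AVX intrinsics */

    int main(void) {
        /* One 256 bit YMM register holds eight 32 bit floats;
           _mm256_add_ps adds all eight lanes in one vaddps. */
        __m256 a = _mm256_set1_ps(1.5f);
        __m256 b = _mm256_set1_ps(2.5f);
        __m256 c = _mm256_add_ps(a, b);

        float out[8];
        _mm256_storeu_ps(out, c);
        for (int i = 0; i < 8; i++)
            printf("%g ", out[i]);   /* prints 4 eight times */
        printf("\n");
        return 0;
    }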


Upvoted. Although 16 exabytes is less headroom if you are memory mapping persistent storage rather than just RAM, which makes increasing sense with SSDs. Even then, 64 bit addressing is still plenty for most scenarios for some time to come.
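For anyone who hasn't used it, memory mapping looks something like this on a POSIX system (a minimal sketch with abbreviated error handling; "data.bin" is just a placeholder name):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/stat.h>

    int main(void) {
        int fd = open("data.bin", O_RDONLY);
        if (fd < 0) return 1;

        struct stat st;
        if (fstat(fd, &st) < 0) return 1;

        /* The file's contents appear directly in the process's
           address space; reads become plain pointer dereferences. */
        const char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) return 1;

        printf("first byte: %d\n", p[0]);

        munmap((void *)p, st.st_size);
        close(fd);
        return 0;
    }

A big SSD-backed data set mapped this way consumes address space in proportion to its size, which is why the 16 exabyte ceiling starts to matter sooner than RAM alone would suggest.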


There's no particular advantage to having 128-bit integers or pointers. 128-bit or larger SIMD has existed for 15 years or so.



