> For servers, really? I don't get it, they could just use RISC-V.
x86 is a mature, established architecture; RISC-V is not. Indeed, I think the privileged-mode architecture is still under development (i.e. what you need to run an actual OS rather than just embedded applications). I'm sure there are many other features x86 possesses that RISC-V lacks too.
I would have thought commercial RISC-V would show up in IoT/embedded applications first. Fewer features required, lower costs (less to lose if it doesn't work out), and fairly static software supplied by a single vendor or a small group of vendors.
Ah yes, because everyone runs servers based on RISC-V and so -- oh wait, they don't.
I'm as big a lover of RISC-V as anyone (and am excited about the possibility of having a completely open CPU which you can load into a free FPGA). But it's delusional to think that x86 is "legacy" at this point.
x86 is dominant in the server marketplace because Intel makes the best server chips for the vast majority of server use cases. Full stop.
If somebody else comes out with a chip that's substantially better than x86 for servers, I expect the industry to transition quite quickly, just like it abandoned SPARC in the '90s.
The industry doesn't produce readily available SPARC systems, apart from NEC, that European space-chip consortium, and what's left of Sun inside Larry's dungeon. Still, SPARC keeps evolving and sits in a similar space as POWER. Certain workloads benefit very much from POWER and SPARC, but they aren't general-purpose systems that work well in a common-denominator kind of way. There just aren't enough SPARC and POWER systems around to be economically viable as an x86 alternative, but technologically they're good choices for many scenarios.
The issue is that manufacturing CPUs at scale is very costly, and without large volume the per-unit cost to cover tooling is higher... lower margins, less profit, higher cost to the customers. Per-unit cost and relative throughput are why Intel rules the roost for the most part, even in servers.
I've looked at some of the costs of ARM servers, and they're way more expensive than they should be (compared to the chips going into phones)... I would think they may eventually become competitive... though Intel has a bit of room to lower pricing and still make money to compete.
But the chip you get in a smart phone is not what you get in a server. For one, I/O is very different and it includes things like 40Gb network silicon.
You mean you want x86 to only have a "legacy software user [sic] case." Your claim is not backed up by reality whatsoever--just ask every Windows user.
Well, Windows has existed on other platforms, especially in the server world. And for end users, you have things like Windows RT, which is still supported.
The serious problem with emulating x86/x86-64 is that it has a strong memory model, while most other platforms (ARM, PowerPC, Itanium, Alpha) have a weak memory model. Only SPARC (SPARC TSO) and zSeries have a similarly strong memory model to x86/x86-64.
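To make the cost concrete, here's a minimal C++11 sketch (my own illustration, not taken from any real emulator). On x86-64 the stores and loads below compile to plain MOVs, because TSO already provides the ordering; on ARM the same guarantees need ldar/stlr or dmb barriers. Since a translator can't tell which of a guest binary's ordinary MOVs actually rely on TSO, it has to emit those barriers conservatively, which is a big performance tax:

    // Producer/consumer handoff. The ordering shown here is implicit in
    // ordinary x86 MOVs but needs explicit barriers on ARM or POWER.
    #include <atomic>
    #include <cassert>
    #include <thread>

    std::atomic<int>  data{0};
    std::atomic<bool> ready{false};

    void producer() {
        data.store(42, std::memory_order_relaxed);    // plain MOV on x86-64
        ready.store(true, std::memory_order_release); // plain MOV on x86-64
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire)) {} // plain MOV on x86-64
        // Never fires: the release/acquire pair on `ready` orders the stores.
        // x86 code gets this ordering for free; an x86-to-ARM translator must
        // insert dmb/ldar/stlr instructions to preserve it.
        assert(data.load(std::memory_order_relaxed) == 42);
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
        return 0;
    }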
Exactly; binary compatibility is a hard problem, and it's especially non-viable on systems without awesome unified package managers à la Linux.
Case in point: despite the ARM processor, running conventional desktop programs on a Raspberry Pi is mostly fine (barring performance issues). Yes, every operating system's build stack can emit ARM binaries, but that's useless unless the developer supports the architecture well (not gonna happen) or there is really nice automation (like Debian's).
edit: unless you're trying to say that binary compatibility wasn't a pain in the butt. They solved it (IIRC, I'm not a long-time OS X user) by bundling binaries for both architectures together for a while. Not viable if you have 10 different architectures.
Around OS X 10.4, there was a transition from the PowerPC to the x86 architecture. It was handled by bundling a dynamic binary translator called Rosetta [0]. Programs ran at pretty much the same speed as on the original architecture.
They had already done something similar in their first arch transition, 68k => PowerPC [1].
Point is, binary compatibility is possible. Far from easy, sure, but it's been done before. The question is: did CPUs evolve so much that it became impossible to translate from one arch to another?
>The question is: did CPUs evolve so much that it became impossible to translate from one arch to another?
I'd say software got a bit more complex compared to then.
Furthermore, that works if you have controlled hardware (which means easier testing and fewer edge cases) and a single transition to worry about (from A to B, not {A,B,C,D,E} -> {A,B,C,D,E}).
Can you imagine how insane it would be if Windows shipped a compatibility layer that translates x86 software to ARM, to RISC-V, to MIPS, to whatever? You'd need to test compatibility for not one but three architectures. No way people are gonna do that; the ROI is almost nonexistent.
So the only solution is to recompile, which is annoying if you don't have the great software infrastructure to do so.
It's not nearly as hard as you think to translate between ISAs. Some things won't translate directly, like, say, the matrix-multiply register in some MIPS supercomputers, but you can easily swap that out for a more mundane approach.
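To illustrate, here's a toy interpreter-style sketch (with hypothetical guest opcodes I made up, not any real MIPS encoding): the common instructions map one-to-one onto host operations, and the exotic one, a fused 2x2 matrix multiply, just gets expanded into mundane scalar arithmetic:

    // Toy guest ISA: Add and Mul translate 1:1; MatMul2x2 has no host
    // equivalent, so it is decomposed into ordinary multiplies and adds.
    #include <array>
    #include <cstdint>

    enum class GuestOp { Add, Mul, MatMul2x2 };

    struct Cpu {
        std::array<int64_t, 32> r{}; // general-purpose registers
        std::array<int64_t, 16> m{}; // exotic "matrix" register file
    };

    void execute(Cpu& cpu, GuestOp op, int d, int a, int b) {
        switch (op) {
        case GuestOp::Add: // direct host add
            cpu.r[d] = cpu.r[a] + cpu.r[b];
            break;
        case GuestOp::Mul: // direct host multiply
            cpu.r[d] = cpu.r[a] * cpu.r[b];
            break;
        case GuestOp::MatMul2x2: { // expand: m[d..d+3] = m[a..a+3] * m[b..b+3]
            std::array<int64_t, 4> out{};
            for (int i = 0; i < 2; ++i)
                for (int j = 0; j < 2; ++j)
                    out[2 * i + j] = cpu.m[a + 2 * i] * cpu.m[b + j]
                                   + cpu.m[a + 2 * i + 1] * cpu.m[b + 2 + j];
            for (int k = 0; k < 4; ++k) cpu.m[d + k] = out[k]; // aliasing-safe
            break;
        }
        }
    }

A real translator does this per basic block and caches the result rather than dispatching per instruction, but the principle is the same.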
I'm hoping that this is really good news for Zen. Because you're right, the server market is a lot less tied to x86 than the desktop market is.
So why would they sign this deal when it would probably be easier to use RISC-V or ARM or MIPS or ...? Perhaps AMD gave them a private demo of Zen and they realized that it was better than all of their other alternatives, even including other architectures.
But if they had their hearts set on x86, unless they could get Intel to play ball (highly unlikely), AMD was their only option even if Zen sucks.
I agree and think the only way they made this deal is if AMD showed that Zen is better than any of the other alternatives. That probably isn't saying much since most of the alternatives are pretty bad, but they probably don't have another Bulldozer on their hands either.
Different Chinese companies are trying every possible path to a "homegrown" server processor, and x86 is only one of them. I'm sure somebody's trying RISC-V.
Google is at least testing OpenPOWER in their data centers. I don't know how serious they are, but people are starting to explore x86 alternatives more.
I highly doubt they care about the specific ISA in this case. The simple fact is, at this point, the only players with proven high-IPC, high-performance CPU designs are Intel (x86), AMD (x86), and IBM (POWER/PowerPC), and that is highly unlikely to change in the near future. IBM already has a licensing deal for POWER in China, and has been moving towards that with OpenPOWER for a long time. Intel would never contemplate such a thing, but AMD is desperate enough at this juncture to try it, even though it could cannibalize their core business in the future.
From what I've seen of Cavium, I wouldn't describe them as general-purpose CPUs, though they are nice for their own niche. I don't think anyone in the West has gotten their hands on anything except press releases of ShenWei systems, so it is unproven as far as I can tell. If it is so great, why was the Chinese National Supercomputer Center trying so badly to get its hands on Xeons to upgrade Tianhe? :P
Edit: Along the lines of Cavium, you might want to check out Tilera. They aren't exactly general-purpose either but they are more so than Octeons, and have respectable single-thread performance from what I've seen.
It's a multi-core MIPS64 processor with PCI, USB, and so on. Remember that MIPS has been used in everything from embedded to SGI Origin supercomputers. Quite general-purpose. Cavium's additions are the SoC integration, lower power, and accelerators.
Regarding ShenWei, it was used in supercomputers. It's probably real. What jumped out is that it's probably stolen IP from the Alpha CPU and drew a lot of watts. So, not as power-efficient or legal as they'd like, was my guess.
Re Tilera
I read the MIT RAW Workstation papers where all that began. ;) Yeah, it's pretty cool, but that's the one that's limited-purpose. It's like an overlap between vector processors, FPGAs, and multicores. I don't know who all uses them, but I did find a 100Gbps network tap and NIDS that used 3 Tileras for its muscle.
I knew of a company that was testing Tilera for 4G basestation/RNC equipment; not sure if that ever panned out (I think they use Cavium + FPGAs now :P). I think Tilera is basically dead as a standalone product now after all the acquisitions; they'll probably be folded into Mellanox's interconnect-acceleration wasteland and never be heard of again.
Sad, as there was so much hype back in the RAW days about where it would be applied. Then it became a commercial activity of a stingy company that didn't foster a strong ecosystem. That effectively sealed its fate, given DSPs, FPGAs, and GPUs were killing everything in their path if we're just talking energy/performance/price. Your prediction might come true.
Idk about x86 itself minus legacy as defined by legal. However, Intel has tried to clean-slate it at least four times. The first, the i432, was revolutionary, forward-thinking, and overdid the hell out of the hardware side. Hundreds of man-years were lost due to low raw performance. The market only cared about raw performance, so it rejected the chip.
Second was the BiiN parallel, high-availability system. Its i960 was a brilliant combo of RISC plus the HA and security aspects of the i432. Rejected by the market due to no backward compatibility, although it was used in the F-35, some storage controllers, and so on. Still available for embedded, but not the good version in the links. :( Especially see the manual and the part about object mechanisms for containment/addressing. Cost Intel and Siemens a billion dollars.
Third, in parallel with the i960, was the i860 RISC core for high-performance supercomputing and embedded systems. It had performance issues and just wasn't popular in general. Another loss.
Fourth, Itanium combined RISC, VLIW, PA-RISC-style security, and reliability features. EPIC/VLIW is probably what did it in, which is unfortunate, because the reliability, speed, and security combo was good. The link below covers the security features, as they alone justified it over x86 imho, and you probably never saw them in comparisons. Just dollars, GHz, and GFLOPS, as if that's all that matters. Used in appliances, supercomputers, and workstations, but it's going away, probably at a cost of hundreds of millions of dollars.
So, Intel has tried to give us something better than x86 four times already, at a cost of billions of dollars. To their credit, they tried and produced a few great designs: flawed for sure, but great in key ways. The market rejected them in favor of raw price/performance and backward compatibility with shit software. So we're stuck with that, given Itanium will go into legacy mode, VIA is losing money on its x86 business, Transmeta was bought, and Loongsons with x86 emulation are a shaky investment.
Gabriel's "Worse is Better" is in full effect here...
This is mostly about custom SoCs, which means, like ARM SoCs, a couple of CPU cores with some special-purpose silicon all in the same package. Think networking, databases, AI, image processing...
x86 has a user case: legacy software.