A lot of confusion in the top-level comments so far. This is about Intel's foundry business, i.e. Intel using its fabs to make chips for other people, to other people's designs. The foundry business absolutely needs to support building SoCs (systems on chips) containing ARM cores.
That says nothing about the competition between ARM and Intel in the server or PC CPU markets. It's not about Intel's future CPU designs.
(Some people, e.g. sitkack, made the same point already in replies; I just think it needs pulling to the top level.)
Intel's Foundry business has long played second fiddle to the CPU business, but it seems like recently they've gotten serious about it due to the weakness of the CPU business at present. And many big SoC vendors would like an alternative to TSMC, which has become rather dominant.
I wonder if dual-ISA processors (x86+ARM) might become a reality as a result of this sometime in the near future --- imagine a CPU that can boot in DOS-compatible 16-bit real mode, transition to 32-bit and then 64-bit x86 protected mode, and from there launch ARM 16/32/64-bit VMs (more like segments) that run at full native speed. As pointed out in a recent discussion here, the frontend is really the biggest difference between modern ISAs; all instructions get broken down into uops for the backend to execute.
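Purely to make that hypothetical concrete, here is a minimal sketch of the imagined mode ladder; no such hardware exists, and every mode name and transition below is invented for illustration:

    from enum import Enum, auto

    class Mode(Enum):
        REAL_16 = auto()      # DOS-compatible 16-bit real mode at reset
        PROT_32 = auto()      # 32-bit x86 protected mode
        LONG_64 = auto()      # 64-bit x86 long mode
        ARM_GUEST = auto()    # invented: an ARM VM/"segment" at native speed

    # The transitions the comment above imagines; purely illustrative.
    ALLOWED = {
        Mode.REAL_16:   {Mode.PROT_32},
        Mode.PROT_32:   {Mode.LONG_64},
        Mode.LONG_64:   {Mode.ARM_GUEST},  # launch an AArch32/AArch64 guest
        Mode.ARM_GUEST: {Mode.LONG_64},    # trap back to the x86 host
    }

    def switch(cur: Mode, nxt: Mode) -> Mode:
        assert nxt in ALLOWED[cur], f"no path {cur.name} -> {nxt.name}"
        return nxt

    m = Mode.REAL_16
    for nxt in (Mode.PROT_32, Mode.LONG_64, Mode.ARM_GUEST):
        m = switch(m, nxt)
    print(m.name)  # ARM_GUEST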
Thirty years ago there was the PowerPC 615, which included an x86 core. It was even pin-compatible with Intel's Pentium:
”The "PowerPC 615" is a PowerPC processor announced by IBM in 1994, but which never reached mass production. Its main feature was to incorporate an x86 core on die, thus making the processor able to natively process both PowerPC and x86 instructions.[37] An operating system running on PowerPC 615 could either choose to execute 32-bit or 64-bit PowerPC instructions, 32-bit x86 instructions or a mix of three. Mixing instructions would involve a context switch in the CPU with a small overhead. The only operating systems that supported the 615 were Minix and a special development version of OS/2.”
”was 330 mm2 large and manufactured by IBM on a 0.35 μm process. It was pin compatible with Intel's Pentium processors and comparable in speed. The processor was introduced only as a prototype and the program was killed in part by the fact that Microsoft never supported the processor.”
AMD originally intended Ryzen to have both x86 and ARM frontends, but the ARM frontend was cancelled after the presumed buyer[0] cancelled their order.
Dropping ARM usermode in the middle of an x86 host[1] would be particularly weird to build architecturally, though. The x86 side would need to correctly retain ARM register state and restore it, and the ARM side would need architectural extensions to generate x86-side interrupts and syscalls for paravirtualization. You'd also need to be able to handle both AArch32 and AArch64 guests on an x86 host - and hopefully that host is in long mode, right!? (A rough sketch follows after the footnotes.)
[0] There was speculation it might have been Amazon?
[1] "Host" in the most generic case of "software that controls the live-ness of other software": i.e. kernel, hypervisor, and/or emulator
There are a number of CPU features that make running translated x86 code much faster than it otherwise would be, like an "Alternate Floating Point Mode" that happens to exactly match x86's floating-point behavior.
Or you could go the other way around: boot with low-power ARM, and fall back on x86 when your applications demand it.
That would be the type of thing that would enable Microsoft to make a serious attempt to shift Windows to another ISA: use the ARM ISA for the OS and newer applications, and have an x86 fallback for legacy applications that require it, without needing an emulation layer. This would allay business concerns over compatibility in an industry-wide transition.
29 years ago, ARM's primary customer and original parent company, Acorn, released the Risc PC, which used an ARM processor but could also accept a 486 processor beside it.
They were on separate cards, but it meant you could run DOS/Windows programs within the Acorn RISC OS.
I vaguely recall reading about Microsoft's x86/amd64 emulation on ARM, that allegedly performed pretty well. So moving to ARM would not necessarily mean giving up on backwards compatibility.
IMHO x86 has better existing facilities for different operating modes and virtualisation --- going back to the 286 protected mode descriptor types --- that make it a good fit for being the "host" of an ARM or other architecture. Also, the x86/PC boot process is far better understood and open than those of the widely differing ARM SoCs.
I feel there are a lot of folks at MS who understand how important it is to have a consistent boot process. They tried to standardize this with the boot system of Windows RT by having it map closely to that of the BIOS. I think they just wanted ARM to run like x86, with a standard interface, so they wouldn't have to build multiple different binaries for different machines.
Of course, this was basically bolting a good idea to a cement block dropped in the ocean; it didn't help that they had locked the Secure Boot keys so that users couldn't fiddle with this.
How do you figure that the x86 is more open? I can download and build all the code for an ARM system right now. There are even standards docs for most of the individual pieces.
I can't do the same with any modern x86 system, and the one company trying (Purism) had to do some pretty serious reverse engineering of Intel's blobs to get there.
This sounds positive. When China invades Taiwan and the US military destroys the TSMC fabs, high-end Arm manufacturing can be taken over by both Samsung and Intel. Still, there will be a years-long, massive chip crisis until enough fabs have been built to meet demand.
>When China invades Taiwan and the US military destroys the TSMC fabs
I'm shocked the sentiment has gone so quickly from "things that are not gonna happen" to "just in case" to "when". Every armchair politician on HN for the last 5 years has been regurgitating the same points: the island is too far from China for a sea invasion, Strait of Malacca consequences, etc.
The state of the world five years ago was drastically different from what it is today. Being a warmonger and dictating diplomatic affairs with real mobilization of brute force has become a lot more realistic and practical now.
China is serious about their One China policy and it's only a question of when they think it's their moment to strike. Russia has already demonstrated for them, sacrificial lamb style, that warmongering does not actually carry significant penalties on the world stage if you're already estranged from the west to begin with.
And before anyone says I'm a Russian and/or Chinese apologist: no. I fucking hate this state of affairs. I would much prefer a world in which we can live in relative peace. Sadly, we aren't backing up, and can't back up, our desires and demands for peace with tangible repercussions against those that violate it.
"Massive chip crisis" doesn't even begin to describe it. This would be a catastrophe worse than the great depression, on a globl scale. Every single electronic item would be rationed. Consumer appliance purchases would barely exist. You would be using the same phone/laptop/etc. for 10 years.
Yeah, but think of how long it would take for the ripples to settle, if they ever could in our lifetime. We're still dealing with supply shocks from the initial panic around COVID, and no fab capacity was actually lost then.
I'm very curious to see the results of this, but my hopes are not high. Foundries and fabless design companies are killing themselves over single-digit percentage improvements in PPA (power, performance, area). I can't see this doing much, though it will be good for everyone to have a competitive and available manufacturing process.
I read the article. What does it mean to optimize an Arm design for an Intel process node? They're going to make it easier for Intel fabs to make Arm chips? Is this because of the new Intel fab to be built in the US? Also, does that mean Intel will be getting more into producing Arm chips for themselves?
In these very advanced manufacturing nodes, your choice of transistor dimensions is quantized and very limited (it used to be continuous). Customers generally get what they get, and have to do their best with what's available.
With DTCO (design-technology co-optimization), the idea is that ARM can do some initial design and see what their PPA looks like, then feed that information back to Intel, who can then tweak the manufacturing process itself to make improvements specific to the ARM product. Similarly, ARM can optimize their design based on feedback from Intel about what is possible with their process (hence "co-optimization"). A toy sketch follows below.
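As an illustration only, here is a toy model of that feedback loop; every knob and the scoring function are invented, since real DTCO involves PDK revisions and full physical-design runs, not a Python loop:

    import itertools

    # Invented, quantized process knobs (foundry side) and design
    # knobs (core side); real DTCO spaces are vastly larger.
    FIN_CONFIGS   = [2, 3]     # fins per transistor
    TRACK_HEIGHTS = [5, 6]     # routing tracks per standard cell
    PIPE_DEPTHS   = [10, 12]   # pipeline stages
    CACHE_KB      = [512, 1024]

    def ppa_score(fins, tracks, depth, cache):
        # Stand-in for real power/performance/area numbers.
        perf  = fins * depth / tracks
        power = fins * cache / 512
        area  = tracks * cache / 512
        return perf - 0.3 * power - 0.2 * area  # arbitrary weights

    # Co-optimization: search the *joint* space instead of freezing
    # the process first and handing design a fait accompli.
    best = max(
        itertools.product(FIN_CONFIGS, TRACK_HEIGHTS, PIPE_DEPTHS, CACHE_KB),
        key=lambda knobs: ppa_score(*knobs),
    )
    print("chosen (fins, tracks, depth, cacheKB):", best)

The only point of the sketch is the shape of the search: it runs over process and design parameters together, which is what distinguishes DTCO from picking a process first and then designing for it.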
The impression I get is that IFS is still in need of customers, and they probably got a big chunk of cash from the US government to help develop the business. If they can show that a particular ARM core performs better on their process, it would drive a lot of business.
I generally read the comments before the article, then article, then come back to the comments.
I don't know how popular that flow is but I have to imagine I'm not the only one.
It prevents me from having to read bullshit articles if it's already called out in the comments, and it highlights bits to pay extra attention to in the article.
Does this make visible the end of the road for x86? In any case, reducing energy needs will be a major evolutionary driver going forward, both because environmental impact will increasingly be priced in, and because of the ever-growing importance of battery-powered mobile devices.
It doesn't, very much, in theory. Figuring out instruction boundaries in x86 code in parallel is rather involved, but after code has been seen once, the boundaries can be marked with an extra bit in the L1 instruction cache, and in loops you're mostly running out of the decoded-instruction cache anyway. The stricter memory-ordering model of x86 (TSO) versus ARM probably has a lot of implications for how the cache hierarchy is designed, but I couldn't speculate in detail. A toy sketch of the boundary-marking trick is below.
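A toy illustration of that predecode trick, with an invented fixed-format length encoding standing in for real x86 (whose length rules are far messier):

    # Toy ISA: the low 2 bits of an instruction's first byte give its
    # length (1-4 bytes). Real x86 length decoding is much hairier.
    def insn_len(first_byte: int) -> int:
        return (first_byte & 0b11) + 1

    def predecode(line: bytes) -> list:
        # First fetch: walk the line sequentially, setting a boundary
        # bit for each byte that starts an instruction.
        starts = [False] * len(line)
        i = 0
        while i < len(line):
            starts[i] = True
            i += insn_len(line[i])
        return starts

    def decode_with_bits(line: bytes, starts: list) -> list:
        # Later fetches: boundaries are already known, so every
        # instruction can be carved out independently (in parallel).
        idx = [i for i, s in enumerate(starts) if s]
        return [line[a:b] for a, b in zip(idx, idx[1:] + [len(line)])]

    line = bytes([0b10, 0xAA, 0xBB, 0b00, 0b01, 0xCC])  # lengths 3, 1, 2
    print(decode_with_bits(line, predecode(line)))
    # [b'\x02\xaa\xbb', b'\x00', b'\x01\xcc']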
However, x86 has a ton of cruft in the instruction set which has to be implemented and kept working despite whatever microarchitectural changes happen. You have to worry about how your Spectre mitigations interact with call gates that haven't been used much since the 286, for instance. That's a lot of extra design work, and even more extra verification work, that has to happen with each new x86 core, which I think is most of ARM's advantage.
I don't know what sort of conflict of interest you might have on the matter. I have none. Just interested in being a good citizen and minimizing the environmental footprint of digital tech.
There might be semantic hair-splitting as to how one controls for different design features so that it is a proper "like-for-like" comparison. But for the longest time the narrative has been that ARM/RISC-type optimisation is favorable for power consumption: "The use of ARM based systems has shown to be a good choice when power efficiency is needed without losing performance." [1]
I'm not an expert in this stuff and would stand corrected if, e.g., both this paper and the related narrative turned out to be a devious plot against x86.
Why does Intel need Arm for collaboration, and what does Arm get out of optimizing "Arm’s IP for Intel’s upcoming 18A process technology?" Why doesn't Intel license Arm designs like anyone else, and optimize it themselves? I also don't quite get why Intel doesn't just design their own competing high efficiency architecture, after abandoning x86 and backwards compatibility, of course, something they should have done two decades ago at least.
This is IFS, Intel Foundry Services. They are a fab that will make parts for anyone that pays them. This isn't Intel the x86 vendor, that is a different business unit.
> I also don't quite get why Intel doesn't just design their own competing high efficiency architecture, after abandoning x86 and backwards compatibility, of course, something they should have done two decades ago at least.
They've tried, at least twice. Itanium and Atom come to mind. It turns out, it's not as easy as it sounds, even back when Intel was near the top of its game.
Atom cores are x86, and they aren't just close; they're very nearly the same thing. The Efficiency cores use the Gracemont microarchitecture, a later-generation Atom design built on Intel 7.
My understanding was that Atom used the x86 instruction set (or amd64, or whatever Intel calls it), but was its own microarchitecture. I could very easily be wrong about that, though.
Itanium was supposed to be powerful, not efficient, and it originated at HP. Atom was x86. If Intel designed something new from the ground up to high-efficiency specifications, I don't think it could be too terrible, and I think it would advance the state of the art to have real competition with ARM designs. The i860 may be Intel's last innovative chip design done solely in-house. Every advance in x86 is just another ugly monstrosity.
Intel bought the StrongARM line of ARM CPUs from DEC (used in some PDAs of the late 90s) and rebranded it XScale. It had some minor successes, but no huge design wins. Intel sold it to Marvell in 2006, right before the iPhone was released and smartphones exploded the market for ARM CPUs.
Intel is already a long-term Arm licensee[1], meaning that they can build and optimise Arm designs for their own use. I'd expect the terms of an Arm IP license to restrict them from sublicensing any Intel-optimised Arm designs to foundry customers. If I understand correctly, this deal is about Intel Foundry Services in future being able to offer implementations of optimised Arm designs to customers.
Even if Intel designed something similar to ARM, legacy wins. It's not replacing mobile OSes any time soon, just like ARM isn't replacing the Windows ecosystem soon.
As others have said, Intel tried and failed. There were Intel phones back in the day. Maybe they should have kept going even if it lost money. Maybe not. Who knows.
Not to mention Apple isn't moving regardless, since they were one of the founders of ARM.
20 years ago, probably. I don't understand why legacy is still so important today. Who is still running very old, 25-year-old software, and just how are they wagging the dog?
SF's subway system is still using floppy disks. Being on HN might make it feel like everyone is on Rust or Rails or something else, but sadly, in the real world, under the hood, there's lots and lots of legacy - yes, important legacy too.
> I also don't quite get why Intel doesn't just design their own competing high efficiency architecture, after abandoning x86 and backwards compatibility, of course, something they should have done two decades ago at least.
Intel structurally sucks. That’s the simple answer.