Hacker News
Intel Problems (stratechery.com)
569 points by camillovisini on Jan 19, 2021 | 426 comments



I think one day we're going to wake up and discover that AWS mostly runs on Graviton (ARM) and not x86. And on that day Intel's troubles will go from future to present.

My standing theory is that the M1 will accelerate it. Obviously all the fully managed AWS services (Dynamo, Kinesis, S3, etc.) can change over silently, but the issue is EC2. I have a MBP, as do all of my engineers. Within a few years all of these machines will age out and be replaced with M1-powered machines. At that point the idea of developing on ARM and deploying on x86 will be unpleasant, especially since Graviton 2 is already cheaper per compute unit than x86 is for some workloads; imagine what Graviton 3 & 4 will offer.


> I have a MBP, as do all of my engineers. Within a few years all of these machines will age out and be replaced with M1-powered machines. At that point the idea of developing on ARM and deploying on x86 will be unpleasant

Is it not at least somewhat possible that at least some of those Apple laptops will age out and be replaced with GNU/Linux laptops? Agreed that developing on ARM and deploying on x86 is unpleasant, but so too is developing on macOS and deploying on Linux. Apple’s GNU userland is pretty ancient, and while the BSD parts are at least updated, they are also very austere. Given that friction is already there, is it likelier that folks will try to alleviate it with macOS in the cloud or GNU/Linux locally?

Mac OS X was a godsend in 2001: it put a great Unix underneath a fine UI atop good hardware. It dragged an awful lot of folks three-quarters of the way to a free system. But frankly I believe Apple have lost ground UI-wise over the intervening decades, while free alternatives have gained it (they are still not at parity, granted). Meanwhile, the negatives of using a proprietary OS are worse, not better.


> Is it not at least somewhat possible that at least some of those Apple laptops will age out and be replaced with GNU/Linux laptops?

Has Linux desktop share been increasing lately? I'm not sure why a newer Mac with better CPU options is going to result in increasing Linux share. If anything, it's likely to be neutral or favor the Mac with its newer/faster CPU.

> But frankly I believe Apple have lost ground UI-wise over the intervening decades, while free alternatives have gained it (they are still not at parity, granted).

Maybe? I'm not as sold on Linux gaining a ton of ground here. I'm also not sold on the idea that the Mac as a whole is worse off interface wise than it was 10 years ago. While there are some issues, there are also places where it's significantly improved as well. Particularly if you have an iPhone and use Apple's other services.


As much as I would like it to happen, I think it's unlikely Linux will be taking any market share away from Macs. That said, I could imagine it happening a couple ways. The first being an increasingly iPhonified and restricted Mac OS that some devs get fed up with.

The second would be Apple pushing all MacBooks to M1 too soon, breaking certain tools and workflows.

While I think both of those scenarios could easily happen, most devs will probably decide to just put up with the extra trouble rather than switch to Linux.


> The second would be Apple pushing all MacBooks to M1 too soon, breaking certain tools and workflows.

I don't think this is an issue. Apple has been fairly straightforward about timelines, and people who want to stick with Intel have plenty of time and options to get their stuff in order in advance. More importantly, Apple fixed most of the major irritations with the MacBook Pro. If they hadn't launched the 16" MBP last year, people would have been stuck with a fairly old/compromised design.

I suspect Apple is going to maintain MacOS right where it is now. People have been worried about "iOSification" for nearly a decade now and while there have been some elements, the core functionality is fundamentally the same.


Big Sur has me more worried about iOS-ification than ever before. The UI is a train wreck. It looks designed for touch and I have no idea why. I guess Sidecar?

They changed a ton about the UI in Big Sur, none of it for the better as far as I can tell. They took away even more choice; I can't have a "classic" desktop experience.

My biggest frustration is that they were willing to make such drastic changes to the desktop UI at all. I have to re-learn a ton of keyboard navigation now. And there doesn't seem to be a coherent design to the workflows.

Such a drastic change seems to be an admission that they thought the previous design language was wrong but they seem to have replaced it with... no vision at all?

I am hopeful for the next generation of M1 MacBook Pros and whatever the next MacOS is. Hopefully they get their design philosophy straight and stick with it.


I quite like the Big Sur UI and definitely don't consider it a train wreck. I've been a Mac user since the PowerBook G4, and the M1 MacBook Air with Big Sur is the best "desktop" computer I've ever owned. I have it connected to a beautiful 32" display, it's fast, silent and I find the UI very usable

I see a strong vision in Big Sur, one that pulls macOS visually into line with iOS, but there are a lot of rough edges right now. Especially with some Catalyst apps bringing iOS paradigms onto Mac. Even Apple's Catalyst apps (News in particular) are just gross, can't even change the font size with a keyboard shortcut


Visually it is a dumpster fire. Window borders are inconsistently sized, the close/minimize/full screen buttons aren’t consistently placed.

There’s an enormous amount of wasted space. You need that 32” monitor. My 16” RMBP now has as much usable space as my 13”.

Keyboard navigation is bizarre. Try this:

1) Open Mail.

2) Cmd W.

3) Cmd 1 (or is it Cmd 0?!, hint: it’s whatever Messages isn’t!)

4) Without using your mouse select a message in your inbox.

5) Without using your mouse navigate to a folder under your inbox.

6) Without using your mouse navigate to an inbox in another account.

All of this is possible in Catalina and earlier with zero confusion (selections are highlighted clearly) and can be done “backwards” using shift. In Big Sur some of it is actually impossible and you have to just guess where you start.

When native apps aren’t even consistent in their behavior and appearance that is a trainwreck.

You may be able to read the tea leaves and see a grand vision here but I have to use a half-baked desktop environment until Not-Jony Ive is satisfied they have reminded me they are not Jony Ive. To them I say, trust me, I noticed.


I don't have a problem with using the smaller MacBook Air 13" screen for development (I upgraded from a 15" MBP)

I'm not a huge Mail user so I'm not up-to-speed on keyboard shortcuts. But I was able to navigate with the keyboard easily — I hit tab to focus on the correct list then use the up/down arrows to select the inbox or message (your 4/5/6). After hitting Cmd+W, both Cmd+0 and Cmd+1 bring the Mail window back for me. And Cmd+Shift+D still sends mail, which is the main one I use

I am a huge Xcode user and primarily use the keyboard for code navigation, and that is as good as ever on the 13" MacBook Air in Big Sur. Also use Sketch a lot, and that has been just great too

I guess we have a very different perception of Big Sur but mine is generally favourable, and I don't see the wasted space that you see. There have been a few weekends where I have done all my work on the 13" MBA, which is only just now possible due to battery life, and the experience has been really, really nice


Yes but those shortcuts are inconsistent. Cmd 1 does not bring back Messages for example. I have to press tab several times in Mail to figure out where I am in Big Sur where in Catalina the selection is always highlighted. You can’t use the keyboard to change mailboxes in Big Sur as far as I can tell. There’s no clear “language” to the shortcuts.

You'd have more usable space with Catalina on that 13" screen. It was a noticeable loss of space upgrading from Catalina to Big Sur on a 16" RMBP. I used to have space for three partially overlaid windows on my 16" screen. Now I am lucky to get two. Usable space on a 16" Big Sur MacBook is similar to a 13" Catalina MacBook. I have both. My workflows changed. There is no benefit to me as a user.

Take a look at this visual comparison: https://www.andrewdenty.com/blog/2020/07/01/a-visual-compari...

Look at the “traffic light” buttons. Note how much thicker the top bar is in Big Sur. It’s 50% taller! That’s a lost row of text.


Messages and Mail both use Cmd+0 to bring the message viewer to the front. That's what the shortcut is listed as in the Window menu. Same with Calendar if you close the main calendar window

Messages does not have great keyboard navigation — I can't tab around like I can in Mail. I am putting this down to the fact it is a Catalyst app and they are a bit sloppy with consistency (not necessarily a Big Sur thing, as these were on Catalina)


You’re right about Messages vs Mail.

But in Catalina Cmd-1 selects the inbox and full sidebar navigation including between accounts is possible with arrow keys. Nested lists can expand and collapse with left and right.

In Big Sur Cmd-1 just reopens the window with no selection. In addition you cannot navigate the full sidebar with arrow keys.

Combine this with the lack of visual indication of your selection and keyboard navigation becomes a struggle.

I have not found a UI improvement in Big Sur.


> Such a drastic change seems to be an admission that they thought the previous design language was wrong but they seem to have replaced it with... no vision at all?

Sure, but Jony Ive just officially left. He was head of all design (when he should have been head of hardware design only). It's natural that there would be greater-than-usual changes as someone new took over.

Big Sur has some pretty terrible changes. There’s nothing surprising that changes occurred. The only surprise is how bad they are.


Yeah exactly. I understand what is going on here. I’m just frustrated I have to suffer through it for the vanity of Not-Jony-Ive.


The frustrating thing for me is the whole idea of rebooting things visually when it's not based on productivity or additional features. So much of this is just a visual reboot.

Overall though when I hear about the "iOSification" I worry foremost about locking down the OS which doesn't seem to be a big issue.

My personal computer is on Big Sur, but I've kept my work laptop on Catalina so a lot of this doesn't hit me work wise. It doesn't seem too horrible to me when working on private projects but that's a small percentage of my time.


> My biggest frustration is that they were willing to make such drastic changes to the desktop UI at all.

I feel you there; one of the reasons I didn't like Windows is big, seemingly random UI changes. I don't think Big Sur is crazy like Windows NT -> Windows Vista crazy, it just feels like a big change for macOS, which has been relatively stable for a few years.

Seems like Apple tends to go long when they make big sweeping UI changes, then in the following releases they dial things back or work out the glitches. It's frustrating for certain.


> Has Linux desktop share been increasing lately?

At least I feel I see a lot more Linux now, not just in the company I work for but also elsewhere.

The speed advantage over Windows is so huge that it is painful to go back once you've seen it.

MS Office isn't seen as a requirement anymore, nobody thinks it is funny if you use GSuite, and besides, the last time I used an Office file for work was months ago: everything exists in Slack, Teams, Confluence or Jira, and these are about equally bad on all platforms.

The same is true (except the awful part) for C# development: it is probably even better on Linux.

People could switch to Mac and I guess many will. For others, like me, Mac just doesn't work, and for us Linux is an almost obvious choice.


You have to be clear that you're talking about developers here.

For the rest of the company, MS Office may very well be essential. In particular, Excel is king for many business functions.


At the company I work for (400 people in Norway, offices in Sweden, Denmark and Romania as well) I don't know anyone who has a MS Office license.

There is a procedure to get one, but so far I don't know anyone who used it. Quite on the contrary I know one of our sales guys used to install and run Linux on his laptop.

Yes, I work at an above-average technical company, but we do have HR, sales etc. and they aren't engineers (at least not most of them).

(email in my profile ;-)


> At least I feel I see a lot more Linux now, not just in the company I work for but also elsewhere.

I see it far more now than I did ~5-10 years ago when it was my daily driver. I'm just not sure if it's reached a baseline of support and flatlined or if it's growing consistently now.

Fully agree with almost all of your points, and if MacOS did go off the deep end in terms of functionality, I'd be back on Linux. It's why I'm a big fan of what Marcan is doing and follow it closely.

If Linux support had been as good back then as it is now, I'd likely have never switched to the Mac.

But now I just stick around for the hardware.


> Has Linux desktop share been increasing lately?

I run on Mac for desktop, Linux for projects. I don’t know that more people have been switching than before, but I thought the author of this[0] piece illustrated well that it’s easier than ever. That said, the author states that they opt not to have a phone. That’s much less lock-in than most of the market.

[0]https://orta.io/on/migrating/from/apple


Fully agree, seems like a good chunk of developer focused software is cross platform at this point.


I develop on GNU/Linux begrudgingly. It has all of my tools, but I have a never-ending stream of issues with WiFi, display, audio, etc. As far as I'm concerned, GNU/Linux is something that's meant to be used headless and ssh'd into.


What distro do you use? I switched from Mac to Pop, and it’s great.

I had already decided that 2021 would be my year of the Linux desktop, but Apple forced my hand a bit early. My 2019 Mac’s WiFi went. Had to lose my primary dev machine for over a week as it shipped out to have a bunch of hardware replaced.

So, I built a PC with parts that have good Linux support. I think that's the key. I imagine System76 machines would run smoothly. It's definitely not as smooth a UX as Mac, but I like being in an open ecosystem on a machine I can repair. And it's had a number of perks, such as Docker running efficiently for once.

Edit: I can now trivially repair my computer if it breaks. The entire rig cost about 1/4 what my MacBook cost, and it is much faster.


For what it's worth, I've been thinking about going to a Mac M1 laptop for email/HR plus a small-form-factor Ubuntu box for development (i.e. Intel NUC, Asus PN-50) that I can ssh into and run headless locally.

With a hybrid schedule of 1-2 days in the office, and with the NUC form factor having gotten smaller and higher performance, I could make a case to actually stick it in my laptop bag for those 1-2 days and do email/surfing only on my train commute. I was never really that productive working on the train anyways.


Try a Raspberry Pi. It just works.


Except for when the wifi fails or your SD card that you boot off of gets corrupted or you try to do something that requires a bit more power than 1.2GHz and 1GB RAM can handle.


Raspberry Pi 4 has up to 8GB of RAM and a 1.5GHz CPU, but your point still stands. Even with those specs it won't provide a fully smooth desktop experience.


> Is it not at least somewhat possible that at least some of those Apple laptops will age out and be replaced with GNU/Linux laptops?

And I personally hope that by then, GNU/Linux will have an M1-like processor available to happily run on. The possibilities demonstrated by this chip (performance+silence+battery) are so compelling that it's inevitable we'll see them in non-Apple designs.

Also, as usually happens with Apple hardware advancements, the Linux experience will gradually get better on M1 MacBooks as well.


I think we can look to mobile to see how feasible this might be: consistently over the past decade, iPhones have matched or exceeded Android performance with noticeably smaller capacity batteries. A-series chips and Qualcomm chips are both ARM. Apple's tight integration comes with a cost when it comes to flexibility, and, you can argue, developer experience, but it's clearly not just the silicon itself that leads to the performance we're seeing in the M1 Macs.


I think there are serious concerns about Qualcomm's commitment to competitive performance instead of just being a patent troll. I think if AWS Graviton is followed by Microsoft[0] and Google[1] also having their own custom ARM chips it will force Qualcomm to either innovate or die. And will make the ARM landscape quite competitive. M1 has shown what's possible. MS and Google (and Amazon) certainly have the $$ to match what Apple is doing.

0:https://www.datacenterdynamics.com/en/news/microsoft-reporte... 1:https://www.theverge.com/2020/4/14/21221062/google-processor...


That's why Qualcomm acquired Nuvia.


I wonder to what extent that's a consequence of Apple embracing reference counting (Swift/Objective-C with ARC) while Google is stuck on GC (Java)?

I'm a huge fan of OCaml, Java and Python (RC but with cyclic garbage collection), and RC very likely incurs more developer headache and more bugs, but at the end of the day, that's just a question of upfront investment, and in the long run it seems to pay off - it's pretty hard for me to deny that pretty much all GC software is slow (or singlethreaded).


Java can be slow for many complex reasons, not just GC. Oracle are trying to address some of this with major proposals such as stack-allocated value types, sealed classes, vector intrinsics, etc., but these are potentially years away and will likely never arrive for Android. However, a lot of Android's slowness is not due to Java but rather just bad/legacy architectural decisions. iOS is simply better engineered than Android, and I say this as an Android user.


Not to mention it took Android about a decade longer than the iPhone to finally get its animations silky smooth. I don't know if the occasional hung frames were the result of GC, but I suspect it.


Approximately zero MacBooks will be replaced by Linux laptops in the next couple years. There is no new story in the Linux desktop world to make a Linux Laptop more appealing. That people already selected to develop on MacOS and deploy to Linux tells you all you need to know there.

MacPorts and Homebrew exist. Both support M1 more or less and support is improving.

Big Sur is a Big Disaster but hopefully this is just the MacOS version of iOS 13 and the next MacOS next year goes back to being mostly functional. I have more faith in that than a serviceable Linux desktop environment.


> Big Sur is a Big Disaster

I may be naive but that seems like hyperbole. I updated to Big Sur shortly after it was released. It was a little bumpy here and there, but not more than any other update. I'd even argue it has been as smooth or smoother than any of my Linux major upgrades in the past 20 years.


I haven't updated but my impression has been that this is a "tock" release, as in "yeah we shipped some big UI changes but we'll refine them, very little of substance changed in the underlying systems". And that's great, and I look forward to being a medium-early adopter before the worst of the UI gets a big refresh next year.


Yeah that’s a great description.

I upgraded my personal laptop and it’s so bad I’m holding off on my work laptop until I am forced to upgrade.


It is mostly functional but visually a mess. “Better than Linux upgrades” is a pretty low bar.


This is why you run your environment in Docker on both Linux and macOS, so you don't have these screwy deployment issues caused by macOS vs Linux differences.


Developing in macOS against Docker is beyond painful. 10-100x build times is not a reasonable cost for platform compat.


With some configuration you can map host folders into the container as volumes. You can then just build on macOS and restart the app server in the container. I was using NodeJS though, so I'm not dealing with libc and syscall incompatibilities and such.


Docker on macOS is a second-class citizen, because it runs in a VM. The networking is much more complicated because of this, which causes endless amounts of hard-to-debug problems, and performance is terrible.


You can easily bring macOS up to Linux level GNU with brew (e.g. brew install coreutils gnu-sed, which installs the GNU versions alongside the BSD ones).

I agree generally though. I see macOS as an important Unix OS for the next decade.


"Linux" is more than coreutils. The Mac kernel is no where close to Linux in capability and Apple hates 3rd party drivers to boot. You'll end up running a half-baked Linux VM anyway so all macOS gets you is a SSH client with a nice desktop environment, which you can find anywhere really.


> all macOS gets you is a SSH client with a nice desktop environment

Also proprietary software. Unfortunately, many people still need Adobe.

I personally like Krita, Shotcut, and Darktable better than any of the Adobe products I used to use, but it's a real issue.

E: Add "many people"


The micro-kernel design on macOS has benefits over Linux's monolithic kernel.

You also get POSIX compliance.


macOS doesn't have a microkernel, but it does have userland drivers and it's pretty good at being macOS/iOS. Linux's oom-killer doesn't work nearly as well as jetsam.


macOS has a Hybrid kernel with a decent portion being micro components I thought?


Mach started as a microkernel, but when they jammed together Mach and BSD they put it in the same process so it's not really separated anymore.

Recently there are some hypervisor-like things for security, and more things have been moving to userland instead of being kexts. I'd still say it's less of a microkernel than Linux since it doesn't have FUSE.


> bring macOS up to Linux level GNU with brew.

Ugh, not even.

Simple and straightforward shell scripts that work on decade-old Linux distributions don't work on Big Sur due to ancient macOS BSD tooling (for example, BSD sed -i requires an explicit suffix argument, and the bundled bash is still stuck at 3.2).

Just look at stackoverflow or similar sites: every answer has a comment "but this doesn't work on my macOS box."

(I've had to add countless if-posix-but-mac workarounds for PhotoStructure).


> Is it not at least somewhat possible that at least some of those Apple laptops will age out and be replaced with GNU/Linux laptops?

Sadly, fewer of my coworkers use Linux now than they did 10 years ago.


> GNU/Linux laptops

Could we do a roll call of experiences so I know which ones work and which ones don't? Here are mine.

    Dell Precision M6800: Avoid.
        Supported Ubuntu: so ancient that Firefox
        and Chrome wouldn't install without source-building
        dependencies.
        Ubuntu 18.04: installed but resulted in the
        display backlight flickering on/off at 30Hz.

    Dell Precision 7200:
        Supported Ubuntu: didn't even bother.
        Ubuntu 18.04: installer silently chokes on the NVMe
        drive.
        Ubuntu 20.04: just works.


Historically, Thinkpads have had excellent support. My T430S is great (although definitely aging out), and apparently the new X1 Carbons still work well. Also, both Dell and Lenovo have models that come with Linux if desired, so those are probably good ones to look at.


I'll have to look into modern thinkpads. I had a bad experience about ~10 years ago, but it wouldn't be fair to bring that forward.

> both Dell and Lenovo have models that come with Linux

Like the Dell Precision M6800 above? Yeah. Mixed bag.


Have used linux on thinkpads since the 90s.

Rules of thumb:
- Older ThinkPads off business lease are great
- Stick to Intel graphics
- Max out the RAM and upgrade storage (NVMe)


Most companies wouldn't end up trying to shove a disk into a computer though; they would buy from a vendor with support and never have compatibility issues. I have owned 3 System76 computers for this reason...


> they would buy from a vendor with support

Like the Dell Precision 6800 above? The one where the latest supported linux was so decrepit that it wouldn't install Firefox and Chrome without manually building newer versions of some of the dependencies?

"System76 is better at this than Dell" is valid feedback, but System76 doesn't have the enterprise recognition to be a choice you can't be fired for.

Maybe ThinkPads hit the sweet spot. I'll have to look at their newer offerings.


> Is it not at least somewhat possible that at least some of those Apple laptops will age out and be replaced with GNU/Linux laptops?

Some definitely will. Enough to be significant? Probably not. Even the most Vim- and CLI-oriented devs I know still prefer a familiar GUI for normal day-to-day work. Are they all going Ubuntu? Or Elementary? I mean, I welcome any migration that doesn't fracture the universe. But I don't think it's likely.


There is literally no chance of that. IT would find this an intolerable burden for them to manage, and I doubt the devs would like it either. Most of them seem pretty enthused to get their hands on an M1.

I’ve known colleagues that tried to run Linux professionally using well reviewed Linux laptops, and their experience has been universally awful. Like “I never managed to get the wifi to work, ever” bad. The idea of gambling every developer on that is a non-starter even at my level, let alone across the org.


FWIW, I have been running Linux as my desktop since 1999, and on a laptop since the mid-2000s. It is doable, and I no longer have the problems which used to be common. Once upon a time Ethernet, WiFi & graphics were screwy, but not for a long time now.


Building server software on Graviton ARM creates a vendor lock-in to Amazon, with very high costs of switching elsewhere. Despite using the A64 ISA and ARM's cores, they are Amazon's proprietary chips no one else has access to. Migrating elsewhere is gonna be very expensive.

I wouldn't be surprised if they subsidize their Graviton offering, taking profits elsewhere. This might make it seem like a good deal for customers, but I don't think it is, at least not in the long run.

This doesn't mean Graviton is useless. For services running Amazon's code as opposed to customers' code (like these PaaS things billed per transaction) the lock-in is already in place, and custom processors aren't gonna make it any worse.


I'm not necessarily disagreeing with you, but... maybe elaborating in a contrary manner?

Graviton ARM is certainly vendor lock-in to Amazon. But a Graviton ARM is just a bog-standard Neoverse N1 core. Which means the core is going to show similar characteristics as the Ampere Altra (also a bog-standard Neoverse N1 core).

There's more to a chip than its core. But... from a performance-portability and ISA perspective... you'd expect performance-portability between Graviton ARM and Ampere Altra.

Now Ampere Altra is like 2x80 cores, while Graviton ARM is... a bunch of different configurations. So it's still not perfect compatibility. But a single-threaded program probably couldn't tell the difference between the two platforms.

I'd expect that migrating between Graviton and Ampere Altra is going to be easier than Intel Skylake -> AMD Zen.


> you'd expect performance-portability between Graviton ARM and Ampere Altra

I agree; that's what I would expect too. Still, are there many public clouds built of these Ampere Altras? Maybe we're gonna have them widespread soon, but until then I wouldn't want to build stuff that only runs on Amazon or my own servers, with only a few on the market and not yet globally available at retail.

Also, AFAIK on ARM the parts where CPUs integrate with the rest of the hardware are custom. Disk and network I/O, the things that matter for servers, differ across ARM chips of the same ISA. The Linux kernel abstracts that away, i.e. stuff is likely to work, but I'm not so sure about performance portability.


> Also, AFAIK on ARM the parts where CPUs integrate with the rest of the hardware are custom. Disk and network I/O, the things that matter for servers, differ across ARM chips of the same ISA. The Linux kernel abstracts that away, i.e. stuff is likely to work, but I'm not so sure about performance portability.

Indeed. But Intel Xeon + Intel Ethernet integrates tightly and drops the Ethernet data directly into L3 cache (bypassing DRAM entirely).

As such, I/O performance portability between x86 servers (in particular: Intel Xeon vs AMD EPYC) suffers from similar I/O issues. Even if you have AMD EPYC + Intel Ethernet, you lose the direct-to-L3 DMA, and will have slightly weaker performance characteristics compared to Intel Xeon + Intel Ethernet.

Or Intel Xeon + Optane optimizations, which also do not exist on AMD EPYC + Optane. So these I/O performance differences between platforms are already the status quo, and should be expected if you're migrating between platforms. A degree of testing and tuning is always needed when changing platforms.

--------

>Still, are there many public clouds built of these Ampere Altras? Maybe we're gonna have them widespread soon, but until then I wouldn't want to build stuff that only runs on Amazon or my own servers, with only a few on the market and not yet globally available at retail.

A fair point. Still, since Neoverse N1 is a premade core available to purchase from ARM, many different companies have the ability to buy it for themselves.

Current rumors look like Microsoft/Oracle are just planning to use Ampere Altra. But like all other standard ARM cores, any company can buy the N1 design and make their own chip.


> > Also, AFAIK on ARM the parts where CPUs integrate with the rest of the hardware are custom. Disk and network I/O, the things that matter for servers, differ across ARM chips of the same ISA. The Linux kernel abstracts that away, i.e. stuff is likely to work, but I'm not so sure about performance portability.

> Indeed. But Intel Xeon + Intel Ethernet integrates tightly and drops the Ethernet data directly into L3 cache (bypassing DRAM entirely).

This will be less of a problem on ARM servers as direct access to the LLC from a hardware master is a standard feature of ARM's "Dynamic Shared Unit" or DSU, which is the shared part of a cluster providing the LLC and coherency support. Connect a hardware function to the DSU ACP (accelerator coherency port) and the hardware can control, for all write accesses, whether to "stash" data into the LLC or even the L2 or L1 of a specific core. The hardware can also control allocate on miss vs not. So any high performance IP can benefit from it.

And if I understand correctly, the DSU is required with modern ARM cores. As most (besides Apple) tend to use ARM cores now, you have this in the package.

More details here in the DSU tech manual: https://developer.arm.com/documentation/100453/0002/function...


> I'd expect that migrating between Graviton and Ampere Altra is going to be easier than Intel Skylake -> AMD Zen.

Could you explain what the migration problems are between Skylake and Zen, beyond AVX-512?


Ubuntu arm64 looks the same on Graviton as on a Raspberry Pi. You can take a binary you've compiled on the RPi, scp it to the Graviton instance and it will just run. That works the other way round too, which is great for speedy Pi software builds without having to set up a cross-compile environment.


Yep, just AArch64. You can probably use QEMU too.

Cross compilation is no big deal these days. Embedded devs cross compile to ARM all day every day.

The tooling will be there when it needs to be.
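
To make the "no big deal" part concrete, here's a minimal Go sketch (Go chosen just because its cross-compiling story is the simplest; the file and binary names are made up):

    // Build for Graviton-class hardware from an x86 laptop with no extra
    // toolchain: GOOS=linux GOARCH=arm64 go build -o hello-arm64 .
    // The resulting binary can be scp'd to a Graviton instance or a 64-bit
    // Raspberry Pi and run as-is.
    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // Handy smoke test after copying the binary to the target machine.
        fmt.Printf("running on %s/%s\n", runtime.GOOS, runtime.GOARCH)
    }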


My Java and .NET applications mostly don't care what hardware they are running on, and many of the other managed languages I use also do not, even when AOT-compiled to native code.

That is the beauty of having properly defined numeric types and a defined memory model, instead of the C-derived approach of whatever the CPU gives you, with whatever memory model it has.
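
As a small illustration of that point (using Go as a stand-in for Java/.NET, since the idea is the same): the width and overflow behaviour of the integer types are defined by the language rather than by whichever CPU the code lands on, so this prints the same thing on amd64 and arm64.

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        var x int32 = math.MaxInt32
        x++                             // overflow wraps; defined behaviour in Go, unlike C
        fmt.Println(x == math.MinInt32) // true on every supported architecture
    }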


I think OP was talking about managed services, like Lambda, ECS and Beanstalk internal control, EC2's internal management system: that is, systems that are transparent to the user.

AWS could very well run their platform systems entirely on Graviton. After all, serverless and cloud are in essence someone else's server. AWS might as well run all their PaaS software on in-house architecture.


While there is vendor lock in with those services, it also has nothing to do with what CPU you are running. At that layer, CPU is completely abstract.


Maybe I wasn't clear enough. I am talking about code that runs behind the scenes. Management processes, schedulers, server allocation procedures, everything that runs on the aws side of things, transparent for the client.


Maybe I'm missing something, but don't the vast majority of applications just not care about what architecture they run on?

The main difference for us was lower bills.


> Maybe I'm missing something, but don't the vast majority of applications just not care about what architecture they run on?

There can be issues with moving to AArch64, for instance your Python code may depend on Python 'wheels' which in turn depend on C libraries that don't play nice with AArch64. I once encountered an issue like this, although I've now forgotten the details.

If your software is pure Java I'd say the odds are pretty good that things will 'just work', but you'd still want to do testing.


Sure, but you're talking about short term problems. RPi, Graviton, Apple Silicon, etc... are making AArch64 a required mainstream target.


That's true. AArch64 is already perfectly usable, and what issues there are will be ironed out in good time.


Even if the applications don't care, there's still the (Docker) container, which cares very much, and which seems to be the vehicle of choice to package and deliver many cloud-based applications today. Being able to actually run the exact same containers on your dev machine which are going to be running on the servers later is definitely a big plus.


Docker has had multiarch support for a while and most of the containers I’ve looked at support both. That’s not to say this won’t be a concern but it’s at the level of “check a box in CI” to solve and between Apple and Amazon there’ll be quite a few users doing that.


Our experience as well. We run a stack that comprises Python, Javascript via Node, Common Lisp and Ruby/Rails. It's been completely transparent to the application code itself.


> they are Amazon’s proprietary chips no one else has access to.

Any ARM licensee (IP or architecture) has access to them. They're just NeoVerse N1 cores and can be synthesized on Samsung or TSMC processes.


Really, you could make the argument for any AWS service and generally using a cloud service provider. You get into the cloud, use their glue (lambda, kinesis, sqs etc) and suddenly migrating services somewhere else is a multi-year project.

Do you think that vendor lock-in has stopped people in the past (or will in the future)? Thinking about those kinds of things is long-term thinking, and many companies think short term.


Heck, Amazon themselves got locked-in to Oracle for the first 25 years of Amazon's existence. Vendor lock-in for your IT stack doesn't prevent you from becoming a successful business.


True, true (and heh, it was me who pushed for Oracle, oops)

But ... the difference is that Oracle wasn't a platform in the sense that (e.g.) AWS is. Oracle as a corporation could vanish, but as long as you can keep running a compatible OS on compatible hardware, you can keep using Oracle.

If AWS pulls the plug on you, either as an overall customer or ends a particular API/service, what do you do then?


You post a nasty note on Parler!

Oh wait.


Why would it be lock-in? If you can compile for ARM you can compile for x86.


Memory model, execution units, simd instructions...


The vast majority of code running is in Python, JS, the JVM, PHP, Ruby, etc., far removed from these concerns.


Some of those languages (especially Python and PHP) utilise C-based modules or packaged external binaries, both of which have to be available and compatible with ARM.

When you run pip or composer on amd64 they often pull these down and you don't notice, but if you try on ARM you quickly discover that some packages don't support it. Sometimes there is a slower fallback option, but often there is none.


Those are pretty minor issues that will be fixed as arm servers get more popular


The real question is, can you compile for ARM and move the binary around as easily as you can for x86?

I'm reasonably sure that you can take a binary compiled with GCC on a P4 back in the day and run it on the latest Zen 3 CPU.


As far as I can tell, yes. Docker images compiled for arm64 work fine on the Macs with M1 chips without rebuilding. And as another commenter said, you can compile a binary on a Raspberry Pi 4 and move it to an EC2 Graviton instance and it just works.


It will probably be a similar situation to x86, with various vendors implementing various instructions in some processors that won't be supported by all. I guess the difference is that there may be many more variants than in x86, but performance-critical code can always use runtime dispatch mechanisms to adapt.
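
A minimal sketch of what that runtime dispatch can look like in Go, assuming the golang.org/x/sys/cpu package (which is real, though how many feature bits get populated varies by OS); the function names are made up and both paths here are plain-Go stand-ins for what would normally be hand-tuned routines:

    package main

    import (
        "fmt"
        "runtime"

        "golang.org/x/sys/cpu"
    )

    func sumGeneric(xs []uint64) uint64 {
        var s uint64
        for _, x := range xs {
            s += x
        }
        return s
    }

    // Placeholder for an implementation that would use ASIMD/NEON or AVX2.
    func sumVector(xs []uint64) uint64 { return sumGeneric(xs) }

    // sum is chosen once at startup based on the CPU the binary actually
    // landed on, so one build can adapt across Graviton, Altra or x86 hosts.
    var sum = func() func([]uint64) uint64 {
        switch {
        case runtime.GOARCH == "arm64" && cpu.ARM64.HasASIMD:
            return sumVector
        case runtime.GOARCH == "amd64" && cpu.X86.HasAVX2:
            return sumVector
        default:
            return sumGeneric
        }
    }()

    func main() {
        fmt.Println(runtime.GOARCH, sum([]uint64{1, 2, 3}))
    }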


It's true that there are extensions to x86, but 99.99% of software out there (the kind you'd commonly install on Windows or find in Linux distribution repos) doesn't use those instructions, or at most detects the features and then uses them.

I don't recall encountering a "Intel-locked" or "AMD-locked" application in more than 20 years of using x86. Ok, maybe ICC, but that one kind of makes sense :-)


Encountering SIGILLs is not super uncommon on heterogeneous academic computer clusters (since -march=native).

But yeah, typically binaries built for redistribution use a reasonably crusty minimum architecture. Reminds me of this discussion for Fedora: https://lists.fedoraproject.org/archives/list/devel@lists.fe...


Audio software usually runs better on Intel than on AMD.


That doesn't mean compilers will emit such instructions; maybe hand written assembler will become less portable if such code is making use of extensions...but that should be obvious to the authors...and probably they should have a fallback path.


> can you compile for ARM and move the binary around as easily as you can for x86?

Yes.


As I understand it, ARM's new willingness to allow custom op-codes is dependent upon the customer preventing fragmentation of the ARM instruction set.

In theory, your software could run faster, or slower, depending upon Amazon's use of their extensions within their C library, or associated libraries in their software stack.

Maybe the wildest thing that I've heard is Fujitsu not implementing either 32-bit or Thumb on their new supercomputer. Is that a special case?

"But why doesn’t Apple document this and let us use these instructions directly? As mentioned earlier, this is something ARM Ltd. would like to avoid. If custom instructions are widely used it could fragment the ARM ecosystem."

https://medium.com/swlh/apples-m1-secret-coprocessor-6599492...


> Maybe the wildest thing that I've heard is Fujitsu not implementing either 32-bit or Thumb on their new supercomputer. Is that a special case?

What's wild about this? Apple dropped support for 32b (arm and thumb) years ago with A11. Supporting it makes even less sense in an HPC design than it does in a phone CPU.


It's interesting that if you step back and look at what Amazon has been most willing to just blow up and destroy, it is the idea of intellectual property of any kind. It comes out clearly in their business practices. This muscle memory may make it hard for ARM to have a long term stable relationship with a company like ARM.


What do you mean?

Also, I think there's a typo in your last phrase.


> Building server software on Graviton ARM creates a vendor lock-in to Amazon

Amazon already has lock-in. Lambda, SQS, etc. They've already won.

You might be able to steer your org away from this, but Amazon's gravity is strong.


This is kind of what should happen right? I'm not an expert, but my understanding is that one of the takeaways from the M1 success has been the weaknesses of x86 and CISC in general. It seems as if there is a performance ceiling which exists for x86 due to things like memory ordering requirements, and complexity of legacy instructions, which just don't exist for other instruction sets.

My impression is that we have been living under the cruft of x86 because of inertia, and what are mostly historical reasons, and it's mostly a good thing if we move away from it.


M1's success shows how efficient and advanced the TSMC 5 nm node is. Apple's ability to deliver it with decent software integration also deserves some credit. But I wouldn't interpret it as the death knell for x86.


> weaknesses of x86 and CISC in general

"RISC" and "CISC" distinctions are murky, but modern ARM is really a CISC design these days. ARM is not at all in a "an instruction only does one simple thing, period" mode of operation anymore. It's grown instructions like "FJCVTZS", "AESE", and "SHA256H"

If anything CISC has overwhelmingly and clearly won the debate. RISC is dead & buried, at least in any high-performance product segment (TBD how RISC-V ends up fairing here).

It's largely "just" the lack of variable length instructions that helps the M1 fly (M1 under Rosetta 2 runs with the same x86 memory model, after all, and is still quite fast).


Most RISCs would fail the "instruction only does one thing" test. ISTR there were instructions substantially more complex than FJCVTZS in the PowerPC ISA.

I think it's time for a Mashey CISC vs RISC repost:

https://www.yarchive.net/comp/risc_definition.html


RISC vs CISC isn't really about instructions doing "one simple thing period."

It's about increased orthogonality between ALU and memory operations, making it simpler and more predictable in an out-of-order superscalar design to decode instructions, properly track data dependencies, issue them to independent execution units, and to stitch the results back into something that complies with the memory model before committing to memory.

Having a few crazy-ass instructions which either offload to a specialized co-processor or get implemented as specialized microcode for compatibility once you realize that the co-processor is more trouble than it's worth doesn't affect this very much.

What ARM lacks is the huge variety of different instruction formats and addressing modes that Intel has, which substantially affect the size and complexity of the instruction decoder, and I'm willing to bet that creates a significant bottleneck on how large of a dispatch and reorder system they can have.

For a long time, Intel was able to make up this difference with process dominance, clever speculative execution tricks, and throwing a lot of silicon and energy at it which you can do on the server side where power and space are abundant.

But Intel is clearly losing the process dominance edge. Intel ceded the mobile race a long time ago. Power is becoming more important in the data center, which are struggling to keep up with providing reliable power and cooling to increasingly power-hungry machines. And Intel's speculative execution smarts came back to bite them in the big market they were winning in, the cloud, when it turned out that they could cause information leaks between multiple tenants, leading to them needing to disable a lot of them and lose some of their architectural performance edge.

And meanwhile, software has been catching up with the newer multi-threaded world. 10-15 years ago, dominance on single-threaded workloads still paid off considerably, because workloads that could take advantage of multiple cores with fine-grained parallelism were fairly rare. But systems and applications have been catching up; the C11/C++11 memory models make it significantly more feasible to write portable lock-free concurrent code. Go, Rust, and Swift bring safer and easier parallelism for application authors, and I'm sure the .NET and Java runtimes have seen improvements as well.

These increasingly parallel workloads are likely another reason that the more complex front-ends needed for Intel's instruction set, as well as their stricter memory ordering, are becoming increasingly problematic; it's becoming increasingly hard to fit more cores and threads into the same area, thermal, and power envelopes. Sure, they can do it on big power hungry server processors, but they've been missing out on all of the growth in mobile and embedded processors, which are now starting to scale up into laptops, desktops, and server workloads.

I should also say that I don't think this is the end of the road for Intel and x86. They have clearly had a number of setbacks of the last few years, but they've managed to survive and thrive through a number of issues before, and they have a lot of capital and market share. They have squeezed more life out of the x86 instruction set than I thought possible, and I wouldn't be shocked if they managed to keep doing that; they realized that their Itanium investment was a bust and were able to pivot to x86-64 and dominate there. They are facing a lot of challenges right now, and there's more opportunity than ever for other entrants to upset them, but they also have enough resources and talent that if they focus, they can probably come back and dominate for another few decades. It may be rough for a few years as they try to turn a very large boat, but I think it's possible.


> I'm willing to bet that creates a significant bottleneck on how large of a dispatch and reorder system they can have

My understanding is the reorder buffer of the m1 is particularly large:

"A +-630 deep ROB is an immensely huge out-of-order window for Apple’s new core, as it vastly outclasses any other design in the industry. Intel’s Sunny Cove and Willow Cove cores are the second-most “deep” OOO designs out there with a 352 ROB structure, while AMD’s newest Zen3 core makes due with 256 entries, and recent Arm designs such as the Cortex-X1 feature a 224 structure."

https://www.anandtech.com/show/16226/apple-silicon-m1-a14-de...


> These increasingly parallel workloads are likely another reason that the more complex front-ends needed for Intel's instruction set, as well as their stricter memory ordering, are becoming increasingly problematic; it's becoming increasingly hard to fit more cores and threads into the same area, thermal, and power envelopes. Sure, they can do it on big power hungry server processors, but they've been missing out on all of the growth in mobile and embedded processors, which are now starting to scale up into laptops, desktops, and server workloads.

Except ARM CPUs aren't any more parallel in comparable power envelopes than x86 CPUs are, and x86 doesn't seem to have any issue hitting large CPU core counts, either. Most consumer software doesn't scale worth a damn, though. Particularly ~every web app which can't scale past 2 cores if it can even scale past 1.


Parallelism isn't a good idea when scaling down, and often neither is concurrency. Going faster is still a good idea on phones (running the CPU at higher speed uses less battery because it can turn off sooner), but counting background services there is typically less than one core free; there is overhead to threading and asyncing, and your program will often go faster if you take most of it out.


Isn't most of M1's performance success due to being an SoC / increasing component locality and bandwidth? I think ARM vs x86 performance on its own isn't a disadvantage. Instead the disadvantages are a bigger competitive landscape (due to licensing and simplicity), growing performance parity, and SoCs arguably being contrary to x86 producers' business models.


ARM instructions are also much easier to decode than x86 instructions which allowed the M1 designers to have more instruction decoders and this, IIRC, is one of the important contributors to the M1's high performance.


Umm, Intel laptop chips are SoCs with on-chip graphics, PCIe 4, WiFi, USB4, and Thunderbolt 4 controllers, direct connectivity to many audio codec channels, plus some other functionality for DSP and encryption.


There isn't any performance ceiling issue. Intel ISA operates at a very slight penalty in terms of achievable performance per watt, but nothing in an absolute sense.

I would argue it isn't time for Intel to switch until we see a little more of the future, as process nodes may shrink at a slower rate. Will we have hundreds of cores? Field-programmable cores? More fixed-function hardware on chip, or less? How will high-bandwidth, high-latency GDDR-style memory mix with lower-latency, lower-bandwidth DDR memory? Will there be on-die memory like HBM for CPUs?



On the flip side that post illustrates just how things can go wrong, too: Windows RT was a flop.


Precisely for the reasons he gave though. It wasn't a unified experience. RT had lackluster support, no compatibility and a stripped down experience.

They're trying to fix it with Windows on ARM now, but that's what people were asking for back then.


It is more a stance to show Microsoft is ready to put Windows on arm CPUs if x86 loses the market.


I can see this happening for things that run in entirely managed environments but I don't think AWS can make the switch fully until that exact hardware is on people's benches. Doing microbenchmarking is quite awkward on the cloud, whereas anyone with a Linux laptop from the last 20 years can access PMCs for their hardware


Very little user code generates binaries that can _tell_ they are running on non-x86 hardware. Rust is safe under the ARM memory model; existing C/C++ code that targets the x86 memory model is slowly getting ported over, but unless you are writing multithreaded C/C++ code that cuts corners it isn't an issue.

Run on the JVM, Ruby, Python, Go, Dlang, Swift, Julia or Rust and you won't notice a difference. It will be sooner than you think.


It's not the memory model I'm thinking of but the cache design, ROB size etc.

Obviously this is fairly niche but the friction to making something fast is hugely easier locally.


The vast majority of developers never profile their code. I think this is much less of an issue than anyone on HN would rank it. Only when the platform itself provides traces do they take it into consideration. And even then, I think most perf optimization is in a category of don't do the obviously slow thing, or the accidentally n^2 thing.

I partially agree with you though, as the penetration of Arm goes deeper into the programmer ecosystem, any mental roadblocks about deploying to Arm will disappear. It is a mindset issue, not a technical one.

In the 80s and 90s there were lots of alternative architectures and it wasn't a big deal; granted, the software stacks were much, much smaller and closer to the metal. Now they are huge, but more abstract and farther away from machine issues.
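
To make the "accidentally n^2 thing" mentioned above concrete, a minimal Go sketch (sizes and function names made up): the naive version re-copies the whole result on every append, the second does the same work in roughly linear time.

    package main

    import (
        "fmt"
        "strings"
        "time"
    )

    // naiveJoin re-allocates and copies the entire result on every append,
    // so it is accidentally O(n^2) in the total output size.
    func naiveJoin(parts []string) string {
        s := ""
        for _, p := range parts {
            s += p
        }
        return s
    }

    // builderJoin does the same job in roughly O(n) using strings.Builder.
    func builderJoin(parts []string) string {
        var b strings.Builder
        for _, p := range parts {
            b.WriteString(p)
        }
        return b.String()
    }

    func main() {
        parts := make([]string, 100_000)
        for i := range parts {
            parts[i] = "x"
        }
        for _, f := range []struct {
            name string
            join func([]string) string
        }{{"naive", naiveJoin}, {"builder", builderJoin}} {
            start := time.Now()
            _ = f.join(parts)
            fmt.Println(f.name, time.Since(start))
        }
    }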


"The vast majority of developers never profile their code."

Protip: New on the job and want to establish a reputation quickly? Find the most common path and fire a profiler at it as early as you can. The odds that there's some trivial win that will accelerate the code by huge amounts is fairly decent.

Another bit of evidence that developers rarely profile their code is that my mental model of how expensive some server process will be to run tends to differ from most other developers' models by at least an order of magnitude. I've had multiple conversations about the services I provide where people ask me what my hardware is, expecting it to run on some monster boxes or something, when it's really just two t3.mediums, which mostly do nothing, and I only have two for redundancy. And it's not like I go profile crazy... I really just do some spot checks on hot-path code. By no means am I doing anything amazing. It's just... as you write more code, the odds that you accidentally write something that performs stupidly badly go up steadily, even if you're trying not to.
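
For what it's worth, a minimal sketch of that kind of spot check using Go's built-in runtime/pprof (hotPath is a made-up stand-in for whatever your service actually spends its time in):

    package main

    import (
        "log"
        "os"
        "runtime/pprof"
    )

    func hotPath() int {
        s := 0
        for i := 0; i < 50_000_000; i++ {
            s += i % 7
        }
        return s
    }

    func main() {
        f, err := os.Create("cpu.prof")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Wrap the code under test, then inspect the result with
        // `go tool pprof cpu.prof`.
        if err := pprof.StartCPUProfile(f); err != nil {
            log.Fatal(err)
        }
        defer pprof.StopCPUProfile()

        _ = hotPath()
    }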


> Find the most common path and fire a profiler at it as early as you can. The odds that there's some trivial win that will accelerate the code by huge amounts is fairly decent.

I've found that a profiler isn't even needed to find significant wins in most codebases. Simple inspection of the code and removal of obviously slow or inefficient code paths can often lead to huge performance gains.


I mean I love finding those "obvious" improvements too but how do you know you've succeeded without profiling it? ;)


Every piece of code I’ve looked at in my current job is filled with transformations back and forth between representations.

It’s so painful to behold.

Binary formats converted to JSON blobs, each bigger than my first hard drive (!), and then back again, often multiple times in the same process.


This isn't really about you or me but the libraries that work behind the spaghetti people fling into the cloud.


Yes, and just like Intel & AMD spent a lot of effort/funding on building performance libraries and compilers, we should expect Amazon and Apple to invest in similar efforts.

Apple will definitely give all the necessary tools as part of Xcode for iOS/MacOS software optimisation.

AWS is going to be more interesting – this is a great opportunity for them to provide distributed profiling/tracing tools (as a hosted service, obviously) for Linux that run across a fleet of Graviton instances and help you do fleet-wide profile guided optimizations.

We should also see a lot of private companies building high-performance services on AWS contribute to highly optimized open-source libraries being ported to Graviton.


So far I found a getting-started repo for Graviton with few pointers: https://github.com/aws/aws-graviton-getting-started


What kind of pointers were you expecting?

I found it to have quite a lot of useful pointers. Specifically, these two docs give a lot of useful information:

https://static.docs.arm.com/swog309707/a/Arm_Neoverse_N1_Sof...

https://static.docs.arm.com/ddi0487/ea/DDI0487E_a_armv8_arm....

And the repo itself contains a number of examples (like ffmpeg) that have been optimized based on these manuals.


Given a well-designed chip which achieves competitive performance across most benchmarks, most code will run sufficiently well for most use cases regardless of the nuances of specific cache designs and sizes.

There is certainly an exception to this for chips with radically different designs and layouts, as well as for folks writing very low-level performance-sensitive code which can benefit from platform-specific optimization (graphics comes to mind).

However, even in the latter case, I'd imagine the platform-specific and fallback platform-agnostic code will be within 10-50% of each other in performance. Meaning a particularly well-designed chip could make the platform-agnostic code cheaper on either a raw performance basis or a cost/performance basis.


If you use a VM language like Java, Ruby, etc, that work is largely abstracted.


True, though the work/fixes sometimes take a while to flow down. One example: https://bugs.openjdk.java.net/browse/JDK-8255351


I honestly don't know why you put Go or the JVM in this list. It isn't that the language, used properly, has sane semantics in multithreaded code; it's that generations of improper multithreaded code have appeared to work because the x86 memory semantics have covered up an unexpressed dependency that should have been considered incorrect.
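
A sketch of the kind of bug being described, in Go (not taken from anywhere in particular): a plain boolean flag used to publish data is a data race that stricter x86 ordering, plus luck, often hides; making the dependency explicit with sync/atomic (or a channel/mutex) is what actually fixes it.

    package main

    import (
        "fmt"
        "sync/atomic"
    )

    var (
        payload int32
        ready   uint32 // accessed only via sync/atomic; the buggy version uses plain reads/writes
    )

    func producer() {
        atomic.StoreInt32(&payload, 42)
        atomic.StoreUint32(&ready, 1) // publishes payload; without atomics this ordering is not guaranteed
    }

    func consumer() int32 {
        for atomic.LoadUint32(&ready) == 0 {
            // spin until the producer publishes
        }
        return atomic.LoadInt32(&payload) // always observes 42 with atomics
    }

    func main() {
        go producer()
        fmt.Println(consumer())
    }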


I would think the number of developers that have “that exact hardware” on their bench is extremely small (does AWS even tell you what cpu you get?)

What fraction of products deployed to the cloud have even had their developers do _any_ microbenchmarking?


Professional laptops don't last that long, and a lot of developers are given MBPs for their work. I personally expect that I'll get an M1 laptop from my employer within the next 2 years. At that point the pressure to migrate from x86 to ARM will start to increase.


You miss my point - if I am seriously optimizing something I need to be on the same chip, not the same ISA.

Graviton2 is a Neoverse core from Arm and it's totally separate from M1.

Besides, Apple doesn't let you play with PMCs easily, and I'm assuming they won't be publishing any event tables any time soon, so unless they get reverse-engineered you'll have to do it through Xcode.


Yes, the m1 isn’t a graviton 2. But then again the mobile i7 in my current MBP isn’t the same as the Xeon processors my code runs on in production. This isn’t about serious optimization, but rather the ability for a developer to reasonably estimate how well their code will work in prod (e.g. “will it deadlock”). The closer your laptop gets to prod, the narrower the error bars get, but they’ll never go to zero.

And keep in mind this is about reducing the incentive to switch to a chip that’s cheaper per compute unit in the cloud. If Graviton 2 was more expensive or just equal in price to x86, I doubt that M1 laptops alone would be enough to incentivize a switch.


That's true but the Xeon cores are much easier to compare and correlate because of the aforementioned access to well defined and supported performance counters rather than Apple's holier than thou approach to developers outside the castle.


We have MBPs on our desks but our cloud is CentOS Xeon machines. The problems I run into are not about squeezing every last ms of performance, since it's vastly cheaper to just add more instances. The problems I care about are that some script I wrote suddenly doesn't work in production because of BSDisms, or Python incompatibilities, or old packages in brew, etc. It would be nice if Apple waved a magic wand and replaced its BSD subsystem with CentOS* but I won't be holding my breath :)

* yes I know CentOS is done, substitute as needed.


I just wish my employer would let me work on a Linux PC rather than a MBP, then I wouldn't have this mismatch between my machine and server...


I think this is a slightly different point from the other responses, but this is not true: if I am seriously optimizing something, I need SSH access to the same chip.

I don't run my production profiles on my laptop - why would I expect to compare how my i5 or i7 chip in a thermally limited MBP performs to how my 64-core server performs?

It's convenient for debugging to have the same instruction set (for some people, who run locally), but for profiling it doesn't matter at all.


I profile in valgrind :/


This is typical Hacker News. Yes, some people "seriously optimize" but the vast majority of software written is not heavily optimized nor is it written at companies with good engineering culture.

Most code is worked on until it passes QA and is then thrown over the wall. For that majority of people, an M1 is definitely close enough to a Graviton.


> typical hacker news

Let me have my fun!


Instruments exposes a fair number of counters, though–what's wrong with using it?


I actually recommend just using 'spindump' and reading the output in a text editor. If you just want to look through a callstack adding pretty much any UI just confuses things.


I am currently working on a native UI to visualize spindumps :(


Well try not to get the user lost in opening and closing all those call stack outline views, I'd rather just scroll in BBEdit ;)


It’s outline views, but I’ll see if I can keep an option to scroll through text too. (Personally, a major reason why I made this was I didn’t want to scroll through text like Activity Monitor does…)


I don't think it takes "exact" hardware. It takes ARM64, which M1 delivers. I already have a test M1 machine with Linux running in a Parallels (tech preview) VM and it works great.


While I generally agree with this sentiment, a lot of people don't realize how much the enterprise supply chain / product chain varies from the consumer equivalent. Huge customers that buy Intel chips at datacenter scale are pandered to and treated like royalty by both Intel and AMD. Companies are courted in the earliest stages of cutting-edge technical and product development, and given rates so low (granted, for huge volume) that most consumers would not even believe them. The fact that companies like Serve The Home exist proves this - for those who don't know, the real business model of Serve The Home is to give enterprise clients the ability to play around with a whole datacenter of leading-edge tech; Serve The Home is simply a marketing "edge API" of sorts for the operation. Sure, it might look like Intel isn't "competitive," but many of the Intel vs. AMD flame wars in the server space over unreleased tech have already had their bidding wars settled years ago for that very tech.

One thing to also consider: the reason Amazon hugely prioritizes its "services" over bare-metal deployments is likely that it can execute those services on cheap ARM hardware. Bare-metal boxes and VMs give the impression that a customer's software will perform in an x86-esque manner. For Amazon, the cost of the underlying compute per core is irrelevant, since they've already solved the issue of using blazing-fast network links to mesh their hardware together - in this way, the ball is heavily in ARM's court for the future of Amazon datacenters, although banking and government clients will likely not move away from x86 any time soon.


I commented [1] on something similar a few days ago,

>Cloud (Intel) isn’t really challenged yet....

AWS is estimated to be ~50% of the hyperscalers.

Hyperscalers are estimated to be 50% of the server and cloud business.

Hyperscalers are expanding at a rate faster than the rest of the market.

The hyperscaler expansion trend is not projected to slow down anytime soon.

AWS intends to have all of its own workloads and SaaS products running on Graviton/ARM (while still providing x86 services to those who need it).

Google and Microsoft are already gearing up their own ARM offerings, partly confirmed by Marvell's exit from the ARM server business.

>The problem is single core Arm performance outside of Apple chips isn’t there.

Cloud computing charges per vCPU. On all current x86 instances, a vCPU is one hyper-thread; on AWS Graviton, a vCPU is an actual CPU core. There are plenty of workloads where large customers like Twitter and Pinterest have tested and shown that an AWS Graviton 2 vCPU performs better than an x86 one, all while being 30% cheaper. At the end of the day, it is work per dollar that matters in cloud computing, and right now in lots of applications Graviton 2 is winning, in some cases by a large margin.

If AWS sells 50% of its services on ARM in 5 years' time, that is 25% of the cloud business alone. Since it offers a huge competitive advantage, Google and Microsoft have no choice but to join the race. And then there will be enough market force for Qualcomm, or maybe Marvell, to fab a commodity ARM server part for the rest of the market.

Which is why I was extremely worried about Intel. (Half of) The lucrative Server market is basically gone. (And I haven't factored in AMD yet.) Five years in tech hardware is basically 1-2 cycles, and there is nothing on Intel's roadmap that shows they have a chance to compete apart from marketing and sales tactics. Those still go a long way, to be honest, but they aren't sustainable in the long term; they are more of a delaying tactic. Add to that a CEO who, despite trying very hard, had no experience in the market and product business. Luckily that is about to change.

Evaluating an ARM switch takes time, software preparation takes time, and, more importantly, getting wafers from TSMC takes time, as demand from all markets is exceeding expectations. But all of this is already in motion, and if this is the kind of response you get from Graviton 2, imagine Graviton 3.

[1] https://news.ycombinator.com/item?id=25808856


>Which is why I was extremely worried about Intel. (Half of) The lucrative Server market is basically gone.

Right. I suspect we'll look back at this period and realize that it was already too late for Intel to right the ship, despite ARM having a tiny share of PC and server sales.

Their PC business is in grave danger as well. Within a few years, we're going to see ARM-powered Windows PCs that are competitive with Intel's offerings in several metrics, but most critically, in power efficiency.

These ARM PCs will have tiny market share (<5%) for the first few years, because the manufacturing capacity to supplant Intel simply does not exist. But despite their small marketshare, these ARM PCs will have a devastating impact on Intel's future.

Assuming these ARM PCs can emulate x86 with sufficient performance (as Apple does with Rosetta), consumers and OEMs will realize that ARM PCs work just as well as x86 Intel PCs. At that point, the x86 "moat" will have been broken, and we'll see ARM PCs grow in market share in lockstep with the improvements in ARM manufacturing capacity (TSMC, etc...).

Intel is in a downward spiral, and I've seen no indication that they know how to solve it. Their best "plan" appears to be to just hope that their manufacturing issues get sorted out quickly enough that they can right the ship. But given their track record, nobody would bet on that happening. Intel better pray that Windows x86 emulation is garbage.

Intel does not have the luxury of time to sort out their issues. They need more competitive products to fend off ARM, today. Within a year or two, ARM will have a tiny but critical foothold in the PC and server market that will crack open the x86 moat, and invite ever increasing competition from ARM.


I think the irony few would have predicted is that Apple switching to Intel started all of this.

The effort to switch over from PPC, and the payoff of doing so, was still a recent memory when the iPhone came out, so they partly pivoted again, to ARM. Then smartphones ate into the laptop and desktop business, increasing the overall base of non-x86 competence in the world. If Apple had not walked through that door, someone else would have, but Apple has a customer relationship that gives it some liberties that others don't necessarily enjoy.


As long as Intel is willing to accept that margins will never be as good as they once were, I think there are lots of things they could still do.

The previous two CEOs chose profit margin, and hopefully we have enough evidence today that it was the wrong choice for the company's long-term survival.

It is very rare for a CEO to do anything radical - it's a difference I've learned to observe between a founder and a CEO. But Patrick Gelsinger is the closest thing to that.


I guess I don't understand why the M1 makes developing on Graviton easier. It doesn't make Android or Windows ARM dev any easier.

I guess the idea is to run a Linux flavor that supports both the M1 and Graviton on the macs and hope any native work is compatible?


It's not hope; ARM64 is compatible with ARM64 by definition. The same binaries can be used in development and production.

Windows ARM development (in a VM) should be much faster on an M1 Mac than on an x86 computer since no emulation is needed.


>It's not hope; ARM64 is compatible with ARM64 by definition

Linux, macOS, or Windows ARM64 binaries are not cross-compatible by definition, thus my question. Is everyone excited to run a Graviton-supported Linux distro on these M1s, or is there something else?

I would also be surprised if every M1 graphics feature was fully supported on these Amazon chips.

If we're talking about cross-compatibility, then we can't use any binaries compiled for M1-specific features either... So no, it's not compatible by definition.


Dev in a Linux VM/container on your M1 MacBook, then deploy to a Graviton instance.
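For compiled languages the appeal is that the artifact itself matches production. A minimal Go sketch (mine, not the parent's) illustrating the workflow; the commands in the comments are the standard Go build invocations, whether run inside an arm64 Linux container on the M1 or cross-compiled from macOS:

  // Build inside an arm64 Linux container:  go build
  // Or cross-compile from an M1 Mac:        GOOS=linux GOARCH=arm64 go build
  // (pure-Go code cross-compiles trivially; cgo makes this harder)
  package main

  import (
      "fmt"
      "runtime"
  )

  func main() {
      // Prints "linux/arm64" both in the local container and on a Graviton
      // EC2 instance - one artifact for development and production.
      fmt.Printf("%s/%s\n", runtime.GOOS, runtime.GOARCH)
  }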


Aren't most of us already programming against a virtual machine, such as Node, .NET or the JVM? I think the CPU architecture hardly matters today.


Many people do code against some sort of VM, but there are still people writing code in C/C++/Rust/Go/&c that gets compiled to machine code and run directly.

Also, even if you're running against a VM, your VM is running on an ISA, so performance differences between them are still relevant to your code's performance.


C, C++, Rust, & Go compile to an abstract machine instead. It is quite hard these days to get them to do something different between x86, ARM, and Power, except by relying on memory-model features not guaranteed on the latter two; and on M1 the memory model apes x86's. Given a compatible memory model (which, NB, ARM has not had until M1), compiling for the target is trivial.

The x86 memory model makes it increasingly hard to scale performance to more cores. That has not held AMD up much, mainly because people don't scale out things that don't perform well when scaled out, and use a GPU when that does better. In principle it has to break at some point, but that has been said for a long time. It is indefinitely hard to port code developed on x86 to a more relaxed memory model, so the overwhelming majority of such codes will never be ported.
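For what it's worth, targeting the abstract machine in practice means stating the ordering explicitly instead of leaning on x86's implicit guarantees. A minimal Go sketch (mine, not the parent's; needs Go 1.19+ for atomic.Bool) of the portable version of an unsynchronized flag hand-off:

  package main

  import (
      "fmt"
      "sync/atomic"
  )

  var (
      data  int
      ready atomic.Bool // explicit hand-off point with defined ordering
  )

  func producer() {
      data = 42
      ready.Store(true) // publishes the earlier write to data
  }

  func main() {
      go producer()
      for !ready.Load() { // pairs with the Store above (happens-before)
      }
      fmt.Println(data) // always 42, on x86, ARM, or Power
  }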


Note that M1 only uses TSO for Rosetta; ARM code runs with the ARM weak memory model.


> It is indefinitely hard to port code developed on x86 to a more relaxed memory model, so the overwhelming majority of such codes will never be ported.

Most code should just work, maybe with some TSan testing. There are other ways to test for nondeterminism too, e.g. sleeping the different threads randomly.

It helps if you have a real end-to-end test suite; for some reason all the developers I've met lately think unit tests are the only kind of tests.
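Go's -race flag is built on the same ThreadSanitizer runtime, and it catches this class of bug even when the test itself nominally passes. A minimal sketch (mine, not the parent's): running "go test -race" flags the unsynchronized counter below, while a plain "go test" usually reports PASS.

  package racecheck

  import (
      "sync"
      "testing"
  )

  func TestCounter(t *testing.T) {
      var counter int // shared with no lock or atomic: a data race
      var wg sync.WaitGroup
      for i := 0; i < 8; i++ {
          wg.Add(1)
          go func() {
              defer wg.Done()
              for j := 0; j < 1000; j++ {
                  counter++ // unsynchronized read-modify-write
              }
          }()
      }
      wg.Wait()
      // Deliberately weak assertion: the test "passes" despite the race,
      // which is exactly why a race detector (or an end-to-end suite) is
      // needed on top of unit tests.
      if counter > 8000 {
          t.Fatalf("impossible count %d", counter)
      }
  }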


> which, NB, ARM has not had until M1

This isn't true at all: other ARM cores have gone all the way to implement full sequential consistency. Plus, the ARM ISA itself is "good enough" to do efficient x86 memory model emulation as part of an extension to Armv8.3-A.


Having worked some on maintaining a stack on both Intel and ARM, it matters less than it did, but it's not a NOOP. e.g. Node packages with native modules are often not available prebuilt for ARM, and then the build fails due to ... <after 2 days debugging C++ compilation errors, you might know>.


If it can emulate x86, is there really a motivation for developers to switch to ARM? (I don't have an M1 and don't really know what it's like to compile stuff and deploy it to "the cloud.")


Emulation is no way to estimate performance.


Sure, but as a counter example Docker performance on Mac has historically been abysmal[0][1], but everyone on Mac I know still develops using it. We ignore the performance hit on dev machines, knowing it won't affect prod (Linux servers).

I don't see why this pattern would fail to hold, but am open to new perspectives.

[0] https://dev.to/ericnograles/why-is-docker-on-macos-so-much-w...

[1] https://www.reddit.com/r/docker/comments/bh8rpf/docker_perfo...


The VM that Docker for Mac uses takes advantage of hardware-accelerated virtualization for running amd64 VMs on amd64 CPUs. You don't get hardware-accelerated virtualization for amd64 VMs on any ARM CPU I know of...


Abysmal Docker performance on non-Linux is mainly because the filesystem isn't native, not because of CPU virtualization.


How much does arch matter if you're targeting AWS? Aren't the differences between local service instances vs instances running in the cloud a much bigger problem for development?


Yeah and I assume we are going to see Graviton/Amazon linux based notebooks any day now.


Honestly, if Amazon spun this right, and they came pre-set-up for development and distribution and had all the right little specs (13- and 16-inch sizes, HiDPI matte displays, long battery life, solid keyboard, MacBook-like trackpad), they could really hammer the backend dev market. Bonus points if they came with some sort of crazy assistance logic, like each machine getting a pre-set-up AWS Windows server for streaming Windows x86 apps.


> could really hammer the backend dev market

That's worth, what, a few thousand unit sales?


If they could get it to $600-800 and have an option for Windows and a decent trackpad/keyboard, you could sell them to students just as well. Shoot, if the DE for Amazon Linux were user-friendly enough they wouldn't even need Windows, since half of schools are on GSuite these days.


The point wouldn't be to sell laptops.


Exactly.


Like a Bloomberg machine for devops.


Can't take Graviton seriously until I can run my binaries via Lambda on it.


At that point, if it's trouble for Intel, it would be a death sentence for AMD...

Intel has fabs; yes, that's maybe what's holding them back at the moment, but it's also a big factor in what maintains their value.

If x86 dies and neither Intel nor AMD pivots in time, Intel can become a fab company. They already offer these services (nowhere near the scale of, say, TSMC), but they have a massive portfolio of fabs, their fabs are located in the West, and they have a massive IP portfolio covering everything from IC design to manufacturing.


> Intel can become a fab company

Not unless they catch up with TSMC in process technology.

Otherwise, they become an uncompetitive foundry.


The point is Intel can't compete as a fab or as a design house.

It's doubtful if Intel would have been able to design an equivalent to the M1, even with access to TSMC's 5nm process and an ARM license.

Which suggests there's no point in throwing money at Intel because the management culture ("management debt") itself is no longer competitive.

It would take a genius CEO to fix this, and it's not obvious that CEO exists anywhere in the industry.


I don't know how you can predict the future like this. Yes, Intel greedily chose not to participate in the phone SoC market and is paying the price.

But their choice not to invest in EUV early doesn't mean that they will never catch up. They still have plenty of cash, and presumably if they woke up and decided to, they wouldn't be any worse off than Samsung. And definitely better off than SMIC.

Similarly, plenty of smart microarchitecture people work at Intel; freeing them to create a design competitive with Zen 3 or the M1 is entirely possible. Given that AMD is still on 7nm and just a couple of percent off the M1, it seems that, if nothing else, Intel could be there too.

But as you point out, Intel's failings are 100% bad management at this point. It's hard to believe they can't hire or unleash what's needed to move themselves forward. At the moment they seem very "IBM" in their moves, but one has to believe that a good CEO with a good engineering background can cut the management bullcrap and get back to basics. They fundamentally have just a single product to worry about, unlike IBM.


AMD looked just as bad not so long ago.


Plus even though Intel has been super fat for 3 decades or so, everyone has predicted their death for at least another 3 decades (during their switch from memory to CPUs and then afterwards when RISCs were going to take over the world).

So they do have a bit of history with overcoming these predictions. We'll just have to see if they became too rusty to turn the ship around.


AMD looked far worse... if Intel is "dying" with a yearly revenue of ~$70B or so, AMD should have been bankrupt 10 times already.

Intel is managing to compete in per-core performance while being essentially 2-3 nodes behind, and is generating several times the revenue of its competitor.

Zen is awesome and we need more competition, but Intel isn't nearly as far behind as it was during the P4 days; its revenue is nearing an all-time high and its business is more diversified than it has ever been. Even if you exclude the datacenter and client computing groups, it is still bringing in more revenue than AMD.


>Not unless they catch up with TSMC in process technology

1. Intel doesn't have to catch up. Intel's 14nm is more than enough for a lot of fabless companies. Not every chip needs a cutting-edge node.

2. Splitting Intel's foundry into a pure-play would allow Intel to build up an ecosystem like TSMC's.

3. Intel's 10nm is denser than TSMC's 7nm. Intel is not too far behind; they just need to solve the yield problem. Splitting up Intel's design and foundry would allow each group to be more agile and not handcuffed to the other.

In fact, Intel Design should license out x86 the way ARM does. Why not take the best business models from the current leaders? Intel Design takes ARM's business model and Intel Foundry takes TSMC's.


The ARM business model isn't that profitable. Intel's market cap right now is about 240 billion, 6 times the amount Nvidia is paying for ARM.


>Intel's market cap right now is about 240 billion, 6 times the amount Nvidia is paying for ARM

So what? Yahoo was a giant in its heyday. BlackBerry was the king of phones. No empire stays on top forever.

Apple and Amazon have created their own CPUs. ARM is killing it in the mobile space.

Intel is the king right now, but with more and more of its customers designing their own CPUs, how long before Intel falls?


ARM Ltd. is earning relatively very little from this, and there seems to be little reason why that would change in the future. This is why it can't really survive as an independent company.

If you compare net income instead of market cap, Intel is ahead by 70 times (instead of 6) and is relatively undervalued compared to other tech companies.


You don't have to be a bleeding-edge foundry; there are tons of components that cannot be manufactured on bleeding-edge nodes, nor need to be.

Intel can't compete right now on the bleeding-edge node, but they outcompete TSMC on essentially every other factor when it comes to manufacturing.


How hard would it be for AMD to make an ARM64 core based partly on the IP of the Zen architecture? Seems like AMD could equal or beat M1 if they wanted.


>> Seems like AMD could equal or beat M1 if they wanted.

Sometime around 5? years ago AMD was planning to have an ARM option. You'd get essentially an ARM core in an AMD chip with all the surrounding circuitry. They hyped it so much I wondered if they might go further than just that.

Further? Maybe a core that could run either ISA, or a mix of both core types. I dunno, but they dumped that (or shelved it) to focus on Zen, which saved them. No doubt the idea and capability still exist within the company. I'd like to see them do a RISCV chip compatible with existing boards.


"Seattle". tried it, couldn't meet perf targets, canned it.


They already have that: https://www.amd.com/en/amd-opteron-a1100

Didn't sell very well.


AMD makes great designs, switching to ARM/RISC-V would make them lose value but not kill them.


And Intel doesn’t?


AMD also has a GPU division.


Intel makes more money selling their Wi-Fi chipsets than AMD makes selling GPUs - heck, even including consoles...


got a source for that? sounds hard to believe.


Computing and Graphics, which includes the Radeon Technologies Group, had revenue of $1.67B in AMD's last quarter; industry estimates are that $1.2-1.3B of that was from CPU sales.

Intel's Internet of Things Group alone had revenue of $680M last quarter, and they have hit $1B in IOTG revenue previously.

https://www.statista.com/statistics/1096381/intel-internet-o...


The thing about all of these articles analyzing Intel's problems is that nobody really knows the details of Intel's "problems" because it comes down to just one "problem" that we have no insight into: node size. What failures happened in Intel's engineering/engineering management of its fabs that led to it getting stuck at 14 nm? Only the people in charge of Intel's fabs know exactly what went wrong, and to my knowledge they're not talking. If Intel had kept chugging along and got down to 10 nm years ago when they first said they would, and then 7 nm by now, it wouldn't have any of these other problems. And we don't know exactly why that didn't happen.


Intel's problem was that they were slow getting their 10nm design online. That's no longer the case. Intel's new problem is much bigger than that at this point.

Until fairly recently, Intel had a clear competitive advantage: Their near monopoly on server and desktop CPUs. Recent events have illustrated that the industry is ready to move away from Intel entirely. Apple's M1 is certainly the most conspicuous example, but Microsoft is pushing that way (a bit slower), Amazon is already pushing their own server architecture and this is only going to accelerate.

Even if Intel can get their 7nm processes on line this year, Apple is gone, Amazon is gone, and more will follow. If Qualcomm is able to bring their new CPUs online from their recent acquisition, that's going to add another high performance desktop/ server ready CPU to the market.

Intel has done well so far because they can charge a pretty big premium as the premier x86 vendor. The days when x86 commands a price premium are quickly coming to an end. Even if Intel fixes their process, their ability to charge a premium for chips is fading fast.


We actually have a lot of insight, in that Intel still doesn't have a good grasp on the problem. Their 10nm was supposed to enter volume production in mid-2018, and it still hasn't truly entered volume production today. Additionally, Intel announced in July 2020 that their 7nm is delayed by at least a year, which means they still haven't figured out their node-delay problem.


> We actually have a lot of insight, in that Intel still doesn't have a good grasp on the problem. Their 10nm was supposed to enter volume production in mid-2018, and it still hasn't truly entered volume production today. Additionally, Intel announced in July 2020 that their 7nm is delayed by at least a year, which means they still haven't figured out their node-delay problem.

Knowing something happened is not the same as knowing "why" it happened. That's the point of my comment. We don't know why they were not able to achieve volume production on 10 nm earlier.


I'll also add that it's fascinating that both 10 nm and 7 nm are having issues.

My understanding (and please correct me if I'm wrong), is that the development of manufacturing capabilities for any given node is an independent process. It's like building two houses: the construction of the second house isn't dependent on the construction of the first. Likewise, the development of 7 nm isn't dependent on the perfection of 10 nm.

This perhaps suggests that there is a deep institutional problem at Intel, impacting multiple manufacturing processes. That is something more significant than a single big manufacturing problem holding up the development of one node.


I think that's not quite right. While it's true that for each node they build different manufacturing lines, generating the required know-how is an iterative/evolutionary process in the same way that process node technology usually builds on the proven tech of the previous node.


SemiAccurate has written a lot about the reasons; for me the essence was: complacency, unrealistic goals, and no plan B in case the schedule slipped.


I think it's just a difficult problem. Intel is trying to do 10 nm without EUV. TSMC never solved that problem because they switched to EUV at that node size.


Why do they not want to use EUV?


Wild speculations: "newness" budget for 10nm was already used up by other innovations. Or they earmarked all EUV resources for 7nm or 5nm. EUV steppers don't exactly grow on trees.


A key issue is volume. Intel is doing far less volume than the mobile chipmakers, so Intel can't spend as much to solve the problem.

It's a bad strategic position to be in, and I agree with Ben's suggestions as one of the only ways out of it.


The point of my comment is that Intel doesn't know either and that's a bigger problem.


Wasn’t the issue that the whole industry did a joint venture, but Intel decided to go it alone?

I worked at a site (in an unrelated industry) where there was a lot of collaborative semiconductor stuff going on, and the only logo "missing" was Intel.


Didn't Samsung also go it alone, or am I mistaken?


Samsung is the opposite of Intel: gaining market share as mobile takes over and Intel's former moat collapses. They have more money to solve their problems.


I think it's pretty clear from the article what happened. They didn't have the capital (stemming from a lack of foresight and incentives) to invest in these fabs, relative to their competition.

If you look at this from an engineering standpoint, I think you'll miss the forest for the trees. From a business and strategy standpoint, this was a classic case of disruption. The dominant player, Intel, was making tons of money on x86 and missed the mobile opportunity. TSMC and Samsung seized the opportunity to manufacture these chips when Intel wouldn't. As a result, they had more money to invest in research and better fabs, funded by the many customers buying mobile chips. Intel, being the only customer of its fabs, would only have money to improve those fabs if it sold more x86 chips (which were stagnating). By this time, it was too late.


I found the geopolitical portion to be the most important aspect here. China has shown a willingness to flex its muscles to enforce its values beyond its borders. China is smart and plays a long game. We don't want to wake up one day and find they've flexed their muscles on their regional neighbors, similar to the rare-earths strong-arming from 2010-2014, and not have fab capabilities to fall back on in the West.

(For that matter, I'm astounded that after 2014 the status quo returned on rare earths with very little state-level strategy or subsidy to address the risk there.)


Ben missed an important part of the geopolitical difference between TSMC and Intel: Taiwan is much more invested in TSMC's success than America is in Intel's.

Taiwan's share of the semiconductor foundry market is about 66%, and TSMC is the leader of that industry. Semiconductors help keep Taiwan free from China's encroachment because they buy it protection from allies like the US and Europe, whose economies heavily rely on them.

To Taiwan, semiconductor leadership is an existential question. To America, semiconductors are just business.

This means Taiwan is also likely to do more politically to keep TSMC competitive, much like Korea with Samsung.


Neither Taiwan nor TSMC can produce the key tool that makes this all work: the photolithography machine itself.

Only ASML currently has that technology.

And it turns out the photolithography machine isn't really a plug-and-play device. It's very fussy. It breaks often. And it requires an army of engineers (as cheap as possible) to man the machines and produce the required yield, in order to make the whole operation profitable.

This is the Achilles’ Heel of the whole operation.

I suspect that China is researching and producing its own photolithography machines, independent of American or Western technology. And when they crack it, they will recapture the entire Chinese market for themselves, and TSMC will become irrelevant to their strategic or tactical plans.


> Semiconductors help keep Taiwan free from China's encroachment because they buy it protection from allies like the US and Europe, whose economies heavily rely on them.

Are there any signed agreements that would enforce this? If China one day suddenly decides to take Taiwan, would the US or Europe step in with military forces?


The closest I've found is this: https://en.wikipedia.org/wiki/Taiwan_Relations_Act

Not guaranteed "mutual defense" of any sort, but the US at least has committed itself to helping Taiwan protect itself with military aid. The section on "Military provisions" is probably most helpful.


China's GDP is projected to surpass US GDP in 2026 [1]. After that, it won't be long until Chinese defense spending surpasses America's. And after that, it won't be long until the US and its allies realize it will be healthier for them to mind their own business when China takes over Taiwan.

[1] https://fortune.com/2021/01/18/chinas-2020-gdp-world-no-1-ec...


There are no official agreements, since neither the US nor any major European country recognizes Taiwan/ROC, but the US has declared multiple times that it would defend Taiwan (see the 'Taiwan Relations Act' and the 'Six Assurances').


Not an agreement, but the US stance towards the defense of Taiwan (ROC) was recently declassified early: https://www.realcleardefense.com/articles/2021/01/15/declass...


https://en.wikipedia.org/wiki/Taiwan_Relations_Act

> The Taiwan Relations Act does not guarantee the USA will intervene militarily if the PRC attacks or invades Taiwan


It would not be wise to commit to intervening in all circumstances. Similarly, the NATO treaties also do not specify in detail how the allies have to react in case of an attack.


>I'm astounded

Our political system and over-financialized economy seem to suffer from the same hyper-short-term focus that many corporations chasing quarterly returns run into. No long-term planning or focus, just perpetual "election season" thrashing one way or another while nothing is followed through on.

Plus, in 2, 4, or 8 years many of the leaders are gone and making money in lobbying or corporate positions. No possibly-short-term-painful but long-term-beneficial policy gets enacted, etc.

And many still uphold our "values" and our system as the ideal, and question anyone who would look toward the Chinese model as providing something to learn from. So I anticipate this trend will continue.


It appears the Republicans are all-in on the anti-China bandwagon. Now you just have to convince the Democrats.

I don't think this will be hard. Anyone with a brain looking at the situation realizes we're setting ourselves up for a bleak future by continuing the present course.

The globalists can focus on elevating our international partners to distribute manufacturing: Vietnam, Mexico, Africa.

The nationalists can focus on domestic jobs programs and factories. Eventually it will become clear that we're going to staff them up with immigrant workers and provide a path to citizenship. We need a larger population of workers anyway.


My impression was that Republicans were only half-hearted about China as that issue made its way through President Trump's administration. The general tone I sensed was that things like tariffs were tolerated in support of their party's leader, not for the tariffs themselves. And the backtracking on sanctions on specific Chinese firms indicated there was little to no significant GOP support pushing President Trump to follow through. The requirement that TikTok sell off its US operations was watered down into a nice lucrative contract for Oracle, though all of that is in limbo and the whole issue has lost steam, its fate possibly resting in the courts, or with a new administration that will be dealing with many larger issues.

The molehill -> mountain issue of Hunter Biden's association with a Chinese private equity fund will raise lots of loud rhetoric, but more for partisan in-fighting than action against China.

Meanwhile the US, the West, and corporations will pay lip service to decrying human rights violations and labor conditions. China will accept this as the need to save face, while any stronger action will be avoided to prevent China from flexing its economic muscles against the corporations or countries that rely on its exports. No company wants to be the next hotel chain forced to temporarily take down its website and issue an embarrassing apology. No country wants to be the next Japan, cut off from rare-earth exports.

Just look at Hong Kong: Sure the US has receded from such issues in the last 4 years, but it's not like any other country did anything more than express their displeasure in various diplomatically acceptable ways.


Hong Kong was a lost cause to begin with. With China having full sovereignty over Hong Kong and the Sino-British Joint Declaration being useless (not enforceable in practice and not even violated, at least on paper), the West could do little more about Hong Kong than about Xinjiang or Tibet.


Trump was all in on the anti-China bandwagon. The traditional Republicans were just tolerating Trump long enough to get their agenda passed - conservative judges and tax cuts. Republicans traditionally have been about free trade.


> [...] and not have fab capabilities to fall back on in the West.

I'm not too concerned:

- There are still a number of foundries in western countries that produce chips which are good enough for "military equipment".

- Companies like TSMC are reliant on imports of specialized chemicals and tools mostly from Japan/USA/Europe.

- Any move from China against Taiwan would likely be followed by significant emigration/"brain drain".


National security doesn't just extend to direct military applications. Pretty much every industry and piece of critical infrastructure comes into play here. It won't matter if western fabs can produce something "good enough" if every piece of technological infrastructure from the past 5 years was built with something better.

As for moves against Taiwan, China hasn't given up that prize. Brain drain would be moot if China simply prevented emigration. I view Hong Kong right now as China testing the waters for future actions of that sort.

Happily though I also view TSMC's pending build of a fab in Arizona as exactly that sort of geographical diversification of industrial and human resources necessary. We just need more of it.


>As for moves against Taiwan, China hasn't given up that prize.

The CCP hasn't given up since the KMT high-tailed it to Taiwan. For more than 40 years, America has cozied up to the Chinese government and done business with China.

America told the Taiwanese government not to "make trouble," but we all know China is the one making all the trouble, with military threats and aircraft flown over Taiwan day in and day out.

Taiwan has built up impressive defenses, from buying weapons (from the US) to developing its own. Yes, China can take Taiwan - that's 100% certain - but at what price?

That's what the Taiwanese are betting on: China will think twice about invading.


I bet TSMC has a number of bombs planted around the most critical machines, much like Switzerland has bombs planted around most critical tunnels and bridges.

Trying to grab Taiwan with force alone, even if formally successful, would mean losing its crown jewels forever.


The bombs have been removed some years ago in Switzerland as the risk of them going off was deemed greater than the risk of sudden invasion.

Just to nitpick, your point absolutely stands


TSMC is not really that important. It's currently only useful for the cutting edge of CPUs, and especially for mobile phones, which get a battery boost from using a more efficient processor.

Military hardware uses CPU technology that's 10+ years old, which the Chinese are capable of fabricating themselves on the mainland. The stuff needs to be rugged and, likely, radiation-hardened.

And besides, isn’t it easier for Switzerland to just launch missiles at the bridges, instead of actively maintaining explosive devices on each bridge?


Maybe TSMC is not that important for making advanced weapons. It is still important for billion-dollar markets like cell phones, cloud servers, desktop and laptop computers, etc. For instance, Apple, which happens to rake in money selling computing devices, is very dependent on TSMC, with no viable replacement available now.


>we all know China is the one making all the trouble, with military threats and aircraft flown over Taiwan day in and day out.

As they are fully allowed to, since Taiwan is their own territory. You and I might disagree, but out of the 195 countries on Earth, the only ones that recognize Taiwan as a country are these few:

Guatemala, Haiti, Honduras, Paraguay, Nicaragua, Belize, Saint Lucia, Saint Vincent And The Grenadines, Marshall Islands, Saint Kitts And Nevis, Palau, Tuvalu, Nauru, Vatican City.

Compare that to the 139 countries that recognize the State of Palestine (where Israel can still do as it damn well pleases!) and it is quite easy to see that, while some might pretend to care about Taiwan (like the US), all they really care about in the end is the money to be made from trading with and in the PRC.

>China will think twice about invading.

The PRC doesn't invade. It isn't the US. It gains influence in much more subtle and intelligent ways than bombing people, but if it wanted to, it could do to Taiwan what Israel does to Palestine, and no one would do anything but talk, talk, talk.


>As they are fully allowed to, since Taiwan is their own territory.

Taiwan is a fully functional country. It doesn't pay taxes to China, and its military is not under China's control. A lot of countries don't even have a visa requirement for Taiwanese citizens, and a Taiwanese passport is better than a Chinese one.

>It isn't the US. It gains influence in much more subtle and intelligent ways than bombing people

LOL. China so intelligently gains influence(?). The Taiwanese voted for the DPP's Tsai and gave her the largest vote total in the history of Taiwan. The Taiwanese have time and time again voted against China's "influence."


> out of the 195 countries on Earth, the only ones that recognize Taiwan as a country are these few:

There would be a lot more if China wasn't so threatening about it.

The moment there is an opening or weakness, much more of the world will jump on board


Taiwan's defenses can hold up against minor probing from China, nothing truly sustained.

The true deterrent to China isn't any treaty agreement to protect Taiwan, which doesn't exist. It's the realpolitik of 30,000 US troops in Taiwan.

Any significant and sustained attack against Taiwan would harm US troops. They'd be little more than a speedbump for China if China acted quickly and in force, but that speedbump would require a significant, and perhaps disproportionate, response from the US.


> Taiwan's defenses can hold up against minor probing from China, nothing truly sustained.

It's the same for every small country bordering a potentially hostile, much larger neighbor. Nobody expects the small country to be able to withstand a full-scale invasion. The point is to make that invasion expensive enough to not be worth it.


You're not even wrong. With Taiwan being an island, any invasion is going to be bloody. Successful amphibious invasions are surprisingly rare in history; usually they only succeed because the defender could not set up a defense or the attacker committed overwhelming resources. However, Taiwan's strategic position (it guards access to the mainland from the ocean) and the ideological accomplishment of tying up this loose end might make it worth it. And in the event the Taiwanese economy and its importance decline, it could become harder for the US to justify defending it.


The issue isn't just military equipment though. When your entire economy is reliant on electronic chips, it's untenable for all of those chips to come from a geopolitical opponent. That gives them a lot of influence over business and politics without having to impact military equipment.


Yeah, for some reason, I assumed that military equipment mostly used, like, low performance but reliable stuff. In-order processors, real time operating systems, EM-hardening. Probably made by some company like Texas Instruments, who will happily keep selling you the same chip for 30 years.


Well, I doubt the US military is too concerned with chasing transistor density per cm^2, but there are cutting-edge areas of military tech in the form of weapons guidance, fighter-jet avionics and weapons systems, etc. that may require more advanced capabilities in components; I don't know.

(As an aside, when we sell things like fighter jets to other countries, they do not get the same avionics & weapons/targeting systems that we use)


That’s a good comparison... CPUs are increasingly a commodity.


> This is why Intel needs to be split in two. Yes, integrating design and manufacturing was the foundation of Intel’s moat for decades, but that integration has become a straight-jacket for both sides of the business. Intel’s designs are held back by the company’s struggles in manufacturing, while its manufacturing has an incentive problem.

The only comparable data point says that this is a terrible idea. AMD spun out GlobalFoundries after a deep slide in their valuation, and the stock (as well as the company's reputation) remained in the doldrums for several years after that. Chipmaking is a big business and there are many advantages to vertical integration when both sides of the company function appropriately. If you own the fabs and there is a surge in demand (as we see now at the less extreme end of the lithography spectrum), your designs get preferential treatment.

Intel's problem isn't the structure of the company, it's the execution. Swan was not originally intended as the permanent replacement to Krzanich[0], and it's a bit strange to draw conclusions about whether the company can steer away from the rocks when the new captain isn't even going to take the helm until the middle of next month.

People are viewing Intel's suggestion that it may use TSMC's fabs for some products as a negative for Intel, but I just see it as a way to exert pressure on AMD's gross margin by putting some market demand pressure on the extreme end of the lithography spectrum (despite sustained demand in TSMC's HPC segment, TSMC's 7nm+ and 5nm are not the main driver of current semiconductor shortages).

[0] https://www.engadget.com/2019-01-31-intel-gives-interim-ceo-...


>The only comparable data point says that this is a terrible idea.

Huh, I would say the complete opposite. AMD wouldn't have survived if it had kept trying to improve its own process instead of going to TSMC.


The problem here is not the success of AMD after splitting, but the complete retreat of GlobalFoundries from the state-of-the-art process node. If this happens again with an Intel split, then we have only TSMC left, off the coast of mainland China in Taiwan, in the middle of a game of thermonuclear tug-of-war between the West and China.

While capitalism will likely be part of the solution, through subsidies for Intel or some other form, it must take a back seat to preventing the scenario described above from becoming reality. We are on the brink of this happening already, with so many people suggesting such a split and ignoring what happened to AMD and GF.

The geopolitical ramifications of completely centralizing the only leading process node in such a sensitive area between the world's superpowers cannot be overstated.

Full disclosure: I'm a shareholder in Intel, TSMC, and AMD.


Let's just hope that if Intel's position is protected because of its strategic importance in the tug of war, it doesn't become another Boeing.


This is a bizarre comparison. Boeing made an entire line of planes that could randomly dive into the ground, and insisted that there be no additional training required for the uptake of those planes. Intel, in contrast, was over-ambitious with 10nm and didn't wait a few more months to incorporate EUV into that process node. The government hasn't banned the use of Intel chips, but the 737 Max 8 was grounded for 20 months. While the pandemic slammed air travel, it has been a major tailwind for the PC and server markets alike.


Besides commercial aviation, Boeing (which swallowed Douglas and McDonnell Aircraft, Rocketdyne, etc.) is the second-largest defense contractor in the world. It's too critical to be allowed to fail. Intel and GlobalFoundries have the only nearly state-of-the-art foundries far from China.


I thought Boeing had many issues pre-covid, not just the 737 Max. Starliner immediately springs to mind.


> The geopolitical ramifications of completely centralizing the only leading process node in such a sensitive area between the world's superpowers cannot be overstated.

This is a fair point but I think it will be less of a concern once TSMC's 5nm fab in Arizona comes online[1]. Samsung is also in the process of building a 5nm fab in Korea[2]. Geopolitically, Korea may still be a minor concern, but much less so than a possible Chinese invasion of Taiwan.

[1]: https://www.anandtech.com/show/15803/tsmc-build-5nm-fab-in-a...

[2]: https://www.bloomberg.com/news/articles/2020-05-21/samsung-t...


I’m pretty sure the Arizona fab is comparatively very very small, although a step in the right direction.


Yes, it says so in the link they cited. The Arizona fab is expected to produce ~20k wafers per month vs. ~100k at one of the Taiwan-based fabs.


The price of creating the fabs for a new node increases exponentially with every node. I remember when there were over 20 leading-node players. Now there are 3, if you aren't counting Intel out. If AMD had remained in that game, there's no way they could have won.


I agree with respect to AMD's situation. I think that was the right decision for them then.

I'm saying that there is a difference between the two situations and there are geopolitical factors at play that mean the answer here is not as simple as splitting Intel into a foundry company and a chip design company, due to what we saw happen to AMD's foundry when they split.

I think it's a bit misleading to say that there are 3 top-node players right now. While Samsung, TSMC, and Intel do compete from a business perspective, from a technical perspective TSMC seems to have a fairly significant lead. Like you said, the price increases dramatically every node. If Intel were to split, why would the new foundry company bother investing a huge amount of money in nodes they can't yet produce at volume? Also, Samsung, while close to TSMC at this point, still produces an inferior product. There seems to be solid evidence of this in the power-consumption comparison of AMD vs. NVIDIA top-end cards [1].

My point being: if Intel were to follow the same road as AMD and split up, we could find ourselves in a situation that, while better for Intel's business, would arguably leave the world worse off overall by leaving TSMC as the only viable manufacturer of high-end chips.

1. https://www.legitreviews.com/amd-radeon-rx-6900-xt-video-car...


I feel like that disclosure isn't warranted since like two of them are in the S&P 500 and everyone here probably has some exposure to that.


Fair enough.


AMD had to go through that in order to become a competitive business again. Look at them now! Maybe Intel's chip design business needs to go through the same thing.

Maybe there is a way for Intel to open up its fab business to other customers and make it more independent, without splitting it off into another company. However, it seems like that would require a change in direction that goes against decades of company culture. It might be easier to achieve that by actually splitting the fab business off.


Self-immolation is only a path to growth if you're a magical bird; it's not a reasonable strategy for a healthy public company. AMD went through seven years of pain and humiliation between that spinoff and its 2015 glow-up. I understand that sometimes the optimal solution involves a short-term hit, but you don't just sell your organs on a lark (nor because some finance bros at Third Point said so). There are obvious strategic reasons to remain an IDM, and AMD would never have gone fabless if the company hadn't been in an existential crisis. Intel is nowhere near that kind of crisis; it may have some egg on its face but the company still dominates market share in its core businesses and is making profits hand over fist.

> Maybe there is a way for Intel to open up its fab business to other customers and make it more independent, without splitting it off into another company.

Intel Custom Foundry. They have several years of experience doing exactly what you describe, and that's how their relationship with Altera (which they later acquired) began. I see AMD's subsequent bid for Xilinx as a copycat acquisition that demonstrates one of the competitive advantages of Intel's position as an IDM: information.


> Intel is nowhere near that kind of crisis; it may have some egg on its face but the company still dominates market share in its core businesses and is making profits hand over fist.

Years from now, people looking back will be amazed at how fast that changed. The time to react to disruption is now, when the company still has the ability to do so.


It's easy to make that kind of prediction, and in fact people made the same prediction about AMD in far worse circumstances, and were still wrong. Semiconductors are extremely important to the world's economy right now, not just in PC and server but all over the tech marketplace.


But look at GlobalFoundries now. The article does suggest that Intel's spun-off fabs would need state funding to survive, but is that really tenable for the long term? Is that TSMC's secret thus far?


GlobalFoundries has stopped at 12nm and I don't see any plans to go beyond that. A notable number of their fabs are in Singapore anyway, so they would fall into the same bucket.


The "US manufacturing is actually stronger than ever" camp used to cook their books by over-weighting Intel profits. Hopefully this will be a wakeup call.


Manufacturing isn't an industry that the U.S. should be interested in currently. Wait until the entire factory can be automated and it will come back. Until then enjoy the cheaper goods provided by globalized labor.


How does Moore's law figure into this? I suspect that TSMC runs into the wall that is quantum physics at around 1-2nm. Considering that TSMC has said that they will be in full production of 3nm in 2022, I can't see 1nm being much beyond 2026-2028. What happens then? Does a stall in die shrinks allow other fabs to catch up?

It appears to me that Intel stalling at 14nm is what opened the door for TSMC and Samsung to catch up. Does the same thing happen in 2028 and allow China to finally catch up?


Modern process node designations (5nm, 3nm, ...) are not measurements any more; they are marketing terms. The actual amount of shrinking is a lot smaller than the name would seem to indicate, and feature sizes are not approaching the quantum limits as fast as it may seem.


If I recall correctly from my uni days, one of the big challenges with further shrinking the physical gates is that the parasitic capacitance on the gates becomes very hard to control, and the power consumption of the chip is directly related to that capacitance. Of course, nothing is so simple and I'm sure Intel can make some chips at very small process sizes, but at the cost of horrible yield.
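For reference (my addition - this is the standard first-order approximation rather than anything from the comment), dynamic switching power is usually modeled as

  P_dynamic ≈ α · C · V_dd² · f

where α is the activity factor (fraction of gates switching per cycle), C the switched capacitance, V_dd the supply voltage, and f the clock frequency - so any parasitic capacitance you fail to control shows up directly in the power budget at a given voltage and frequency.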


The current state of the art seems to be 3D transistors (FinFETs, GAAFETs), which are one possible way to address the capacitance issue and which open up many design possibilities. They lead to other challenges though, for example heat dissipation.


I did not know that! Though, that answer raises its own questions...

If the two are entirely unlinked, what's stopping Intel from slapping "Now 3nm!" on their next gen processors? Surely some components must be at the advertised size, even if it's no longer a clear cut all-or-nothing descriptor, right? What's actually being sized down and why is it seemingly posing so many challenges for Intel's supply chain?


>"Now 3nm!" on their next gen processors?

Nothing.

It started when Samsung began using feature-size names just to gain a competitive marketing advantage, and then TSMC had to follow because their customers and shareholders were putting a lot of pressure on them.

While ingredient branding is important, at the end of the day the chip has to perform; otherwise your ingredient branding would suffer and such a strategy would no longer work. Samsung is already tasting its own medicine.

P.S - That "Now 3nm!" reminds me of "3D Now!" from AMD.


This article has a pretty good overview of the situation and other metrics that actually track progress: https://spectrum.ieee.org/semiconductors/devices/a-better-wa...


They can call it whatever they want but it will need to show huge performance improvements for anyone to actually care.


There's a nice wiki where you can look up more detailed specs on the processes of each contender, e.g. 5nm: https://en.wikichip.org/wiki/5_nm_lithography_process


Interesting, it says:

Intel's 5-nanometer process node is expected to ramp around the 2023 timeframe.


I think it'll be a good thing when people stop worrying about process node technology and start worrying about performance and power usage.

Intel's 14nm chips are already competitive with AMD's (TSMC's, really) 7nm chips. The i7-11700, or whatever the newest one coming out soon is called, is going to be pretty much exactly at parity with AMD's Ryzen 5000 series.

So if node shrinkage is such a dramatic improvement in performance and power usage, then when Intel unfucks themselves and refines their 10nm node, their 7nm node, and whatever node comes after that, they'll clearly be more performant than AMD... and Apple's M1.

Process technology is holding Intel back. They fix that, they get scary again.


Since AMD introduced its first 7nm chip, Intel's 14nm chips have not been competitive.

Intel's 14-nm process has only 1 advantage over any other process node, including Intel's own 10 nm: the highest achievable clock frequency, of up to 5.3 GHz.

This advantage is very important for games, but not for most other purposes.

Since AMD's first 7-nm chip, their CPUs have consumed much less power at a given clock frequency than Intel's 14-nm parts.

Because of this, whenever more cores are active and the clock frequency is limited by the total power consumption, the clock frequency of the AMD CPUs is higher than that of any Intel CPU with the same number of active cores. This led to AMD winning any multi-threaded benchmark even with Zen 2, when they did not yet have the advantage of a higher IPC than Intel, as they do with Zen 3.

With the latest variant of Intel's 10-nm process, Intel has about the same power consumption at a given frequency and the same maximum clock frequency as the TSMC 7-nm process.

So Intel should have been able to compete with AMD by now, except that they still appear to have huge difficulties making larger chips in sufficient quantities, so they are forced to use workarounds, like launching the Tiger Lake H35 series of laptop CPUs with smaller dies, to have something to sell until they are able to produce the larger 8-core Tiger Lake H CPUs.
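As a toy illustration of the "clock limited by total power" point (assuming, very roughly, that per-core power grows with the cube of frequency once the voltage needed to sustain that frequency is factored in; every number here is invented):

    # Toy model: per-core power ~ f^3 (P ~ C*V^2*f and V scales roughly with f),
    # so a fixed package power budget caps the all-core clock.
    def all_core_clock(package_watts, cores, watts_per_core_at_4ghz):
        return 4.0 * (package_watts / cores / watts_per_core_at_4ghz) ** (1.0 / 3.0)

    budget = 105.0  # invented package power budget in watts
    for label, w in [("efficient core", 8.0), ("hungry core", 14.0)]:
        print(label, round(all_core_clock(budget, 16, w), 2), "GHz with 16 cores active")

The core that needs fewer watts at a given clock ends up running all of its cores noticeably faster under the same budget, which is exactly what the multi-threaded benchmarks showed.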


"This advantage is very important for games, but not for most other purposes."

I disagree. The majority of desktop applications are only lightly threaded, e.g. Adobe products, office suites, Electron apps, and anything mostly written before 2008.


Save for heavy lifting in Adobe products, those other apps don't meaningfully benefit from higher clock speeds because their operations aren't CPU bound. The high-speed Intel chips see an advantage when there's a single CPU-bound process maxing out a core. Office and Slack don't tend to do that (well, maybe Slack...). Also, if you've got multiple processes running full tilt, Intel's clock speed advantage goes away because the chip clocks down so as not to melt.

So with a heavy desktop workload with multiple processes or threads the Intel chips aren't doing any better than AMD. It's only in the single heavy worker process situation where Intel's got the advantage and that advantage is only tens of percentage points better than AMD.

So Intel's maximum clock speed isn't the huge advantage it might seem.


You are right that those applications benefit from a higher single-thread performance.

Nevertheless, unlike in competitive games, the few percent of extra clock frequency that Intel previously had in Comet Lake versus Zen 2, and which Intel will probably have again in Rocket Lake versus Zen 3, are not noticeable in office applications or Web browsing, so they are not a reason to choose one vendor or the other.


>Intel's 14nm chips are already competitive with AMD's (TSMC's, really) 7nm chips.

The only metric on which Intel's 14nm beats TSMC's 7nm is the clock speed ceiling. Other than that, there is nothing competitive about an Intel 14nm chip compared to an AMD (TSMC) 7nm chip from a processing perspective.

And that is not a fault of TSMC or AMD. They just decided not to pursue that route.


> I think it'll be a good thing when people stop worrying about process node technology and start worrying about performance and power usage.

I think it's more that people attribute too much significance to process node technology when trying to understand why performance & power are what they are.

For single-core performance, the gains from a node shrink are in the low teens, percentage-wise. Power improvements at the same performance are a bit better, but still not as drastic as people tend to treat them.

10-20 years ago just having a better process node was a massive deal. These days it's overwhelmingly CPU design & architecture that dictate things like single-core performance. We've been "stuck" at the 3-5ghz range for something like half a decade now and TSMC has worse performance here than Intel's existing 14nm. Still hasn't been a single TSMC 7nm or 5nm part that hits that magical 5ghz mark reliably enough for marketing, for example. And that's all process node performance is - clock speed. M1 only runs at 3.2ghz - you could build that on Intel's 32nm without any issues. Power consumption would be a lot worse, but you could have had "M1-like" single-core performance way back in 2011 if you had a time machine to take back all the single-core CPU design lessons & improvements, that is.


You are right that, due to their design, CPUs like the Apple M1 can reach the same single-thread performance as Intel/AMD at a much lower clock frequency, and that such clock frequencies were reached much earlier (Intel Nehalem hit 3.3 GHz turbo in 2009, and Sandy Bridge had a 3.4 GHz base clock in 2011). Nevertheless, it would have been impossible to make a CPU like the Apple M1 in any earlier technology, not even Intel's 14 nm.

To achieve its very high IPC, the M1 replicates and widens a lot of internal resources and also uses very large caches. All of those require a huge number of transistors.

Implementing an M1-like design in an earlier technology would have required a very large area, resulting in a price and a power consumption so large that such a design would have been infeasible.

However, you are partially right in the sense that Intel clearly was overconfident due to their clock frequency advantage and they have decided on a roadmap to increase the IPC of their CPUs in the series Skylake => Ice Lake => Alder Lake that was much less ambitious than it should have been.

While Tiger Lake and Ice Lake have about the same IPC, Alder Lake is expected to bring an increase similar to that from Skylake to Ice Lake.

Maybe that will be competitive with Zen 4, but it is certain that the IPC of Alder Lake will still be lower than the IPC of Apple M1, so Intel will continue to be able to match the Apple performance only at higher clock frequencies, which cause a higher power consumption.


> To achieve its very high IPC, M1 multiplies a lot of internal resources and also uses very large caches. All those require a huge number of transistors.

Yes & no. Most of the M1 die isn't spent on CPU; it's spent on things like the GPU, the neural engine, and the SLC cache. A "basic" dual-core CPU-only M1 would have been very manufacturable back in 2011 or so. After all, Intel at some point decided to spend a whole lot of transistors adding a GPU to every single CPU regardless of worth; there were transistors to spare.


True, but the M1 CPU, together with the necessary cache and memory controller still occupies about a third of the Apple M1 die.

In the Intel 32-nm process, the area would have been 30 to 40 times larger than in the TSMC 5 nm process.

The 32-nm die would have been as large as a book, many times larger than any manufacturable chip.

By 2011, 2-core CPUs would not have been competitive, but even cutting the area in half is not enough to bring the size into the realm of the possible.


Where are you getting the claim that the M1 CPU + memory controller is about a third of the M1 die? Looking at this die shot + annotation: https://images.anandtech.com/doci/16226/M1.png The Firestorm cores + 12MB cache are far less than 1/3rd of the die, and the memory controller doesn't look particularly large.

The M1 total is 16B transistors. A 2700K on Intel's 32nm was 1.1B transistors. You're "only" talking something like ~4x the size necessary, if that. Of course, the 2700K already has a memory controller on it, so you really just need the Firestorm cores part of the M1, which is a _lot_ less than 1/3rd of the die size.

But let's say you're right and it is 1/3rd. That means you need ~5B transistors. Nvidia was doing 7B transistors on TSMC's 28nm in 2013 on consumer parts (GTX 780).
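Back-of-envelope with the figures quoted in this thread (reported/approximate counts, so treat it as an order-of-magnitude check only):

    # Rough arithmetic on the transistor counts quoted above
    m1_total = 16e9        # reported M1 transistor count
    third = m1_total / 3   # the "one third of the die" scenario
    gtx780 = 7e9           # ~7B transistors shipped on TSMC 28nm in 2013
    print(round(third / 1e9, 1), "B transistors for the one-third scenario")  # ~5.3B
    print(round(third / gtx780, 2), "x the GTX 780's budget")                 # ~0.76x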


I was looking at the same image.

A very large part of the die is not labelled, and it must include some blocks that cannot be omitted from the CPU, e.g. the PCIe controller and various parts of the memory controller, such as buffers and prefetchers.

The area labelled for the memory channels seems to contain just the physical interfaces for the memory, that is why it is small. The complete memory controller must include parts of the unlabelled area.

Even if the CPU part of the M1 were smaller, e.g. just a quarter, that would be 30 square mm. In the 32 nm technology that would likely need much more than 1000 square mm, i.e. it would be impossible to manufacture.

The number of transistors claimed for various CPUs or GPUs is mostly meaningless and usually very far from the truth anyway.

The only thing that matters for estimating the costs and the scaling to other processes is the area occupied on the die, which is determined by many more factors than the number of transistors used, even if that number were reported accurately. (The transistors are not identical, they can have very different sizes, and the area of various parts of a CPU may be determined more by the number of interconnections than by the number of transistors.)


> that would be 30 square mm. In the 32 nm technology that would likely need much more than 1000 square mm,

Where are you getting that scaling from? Intel's 32nm is reported at 7.5 MTr/mm2 while TSMC's 5nm is 171 MTr/mm2. 30 sqmm of TSMC 5nm would therefore be around 680 sqmm on 32nm. That's definitely on the large side, but then again, chips nearly that large were manufactured & sold in consumer products.
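Spelling out that scaling with the reported densities (rough figures, so the result is an order-of-magnitude estimate):

    # Area scaling using the reported transistor densities (MTr/mm^2)
    intel_32nm = 7.5     # reported density for Intel 32nm
    tsmc_5nm = 171.0     # reported density for TSMC 5nm
    area_5nm = 30.0      # the ~30 mm^2 CPU-portion estimate from above
    area_32nm = area_5nm * tsmc_5nm / intel_32nm
    print(round(area_32nm), "mm^2")   # ~680 mm^2: big, but in shipped-GPU territory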

> A very large part of the die is not labelled and it must include some blocks that cannot be omitted from the CPU, e.g. the PCIe controller and various parts from the memory controller, e.g. buffers and prefetchers.

You need those things to have a functional modern SoC, but you don't need as much of them, or the same things, to have just a desktop CPU without an iGPU, nor to have just the raw compute performance of the M1. It'll be a heck of a lot harder to feed it on dual-channel DDR3, for sure, but all those canned benchmarks that fit in cache would still be just as fast.


> We've been "stuck" at the 3-5ghz range for something like half a decade

It’s closer to two decades, actually. Pentium 4 (Northwood) reached 3.06 GHz in 2002, using 130 nm fabrication process.


Jim Keller believes that at least 10-20 years of shrinking is possible [1].

[1] https://www.youtube.com/watch?v=Nb2tebYAaOA&t=1800


> I can't see 1nm being much beyond 2026-2028. What happens then?

Whatever marketing people come up with? Moore's law is not a law but an observation. It doesn't really matter, though. We are moving to 3D chips, chiplets, advanced packaging, etc.


Quantum effects haven't been relevant for a while now. The "nanometer" numbers are marketing around different transistor topologies like FinFET and GAA (gate-all-around). There's a published roadmap out to "0.7 eq nm". Note how the "measurements" all have quotes around them:

https://www.extremetech.com/computing/309889-tsmc-starts-dev...


Eventually, CPUs will have to focus on going wide, i.e. growing number of cores and improving interconnections.


moar coars


I feel this piece ducks one of the most important questions: what is the future and value of x86 to Intel? For a long time x86 was one half of the moat, but it feels like that moat is close to crumbling.

Once that happens the value of the design part of the business will be much, much lower - especially if they have to compete with an on form AMD. Can they innovate their way out of this? Doesn't look entirely promising at the moment.


Why are people so hung up on the x86 thing? ARM (the company) keeps getting sold on because everyone has now understood they don't really matter; they are not driving the innovations, they were simply the springboard for the Apples, Qualcomms and Amazons to drive their own processor designs, and they are not set up to profit from that. ARM's reference designs aren't competitive, the M1 is.

Instruction set architecture at this point is a bikeshed debate, it's certainly not what is holding Intel back.


I'm not sure that's entirely true. According to this (see "Why can’t Intel and AMD add more instruction decoders?"):

https://debugger.medium.com/why-is-apples-m1-chip-so-fast-32...

..a big part of the reason the M1 is so fast is the large reorder buffer, which is enabled by the fact that arm instructions are all the same size, which makes parallel instruction decoding far easier. Because x86 instructions are variable length, the processor has to do some amount of work to even find out where the next instruction starts, and I can see how it would be difficult to do that work in parallel, especially compared to an architecture with a fixed instruction size.
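As a toy illustration (not how real decoders are built): with fixed-width instructions every decode lane knows its start offset up front, while with variable-length instructions each lane's start depends on the previous instruction's length:

    # Toy decode model. Fixed-width ISA: instruction k starts at k*4, so N
    # decoders can all start in parallel. Variable-length ISA: you must know
    # instruction k's length before you know where k+1 begins.
    def decode_starts_fixed(n_instructions, width=4):
        return [i * width for i in range(n_instructions)]   # all known up front

    def decode_starts_variable(lengths):
        starts, pc = [], 0
        for length in lengths:            # inherently sequential dependency
            starts.append(pc)
            pc += length
        return starts

    print(decode_starts_fixed(8))                       # [0, 4, 8, 12, ...]
    print(decode_starts_variable([1, 3, 2, 5, 1, 4]))   # each start depends on the last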


Well, if we can have speculative execution, why not speculative decode? You could decode the stream as if the next instruction started at $CURRENT_PC+1, $CURRENT_PC+2, etc. When you know how many bytes the instruction at $CURRENT_PC takes, you could keep the right decode and throw the rest away.

Sure, it would mean multiple duplicate decoders, which eats up transistors. On the other hand, we've got to find something useful for all those transistors to do, and this looks useful...


According to the article I linked, that's basically how they do it:

"The brute force way Intel and AMD deal with this is by simply attempting to decode instructions at every possible starting point. That means x86 chips have to deal with lots of wrong guesses and mistakes which has to be discarded. This creates such a convoluted and complicated decoder stage that it is really hard to add more decoders. But for Apple, it is trivial in comparison to keep adding more.

In fact, adding more causes so many other problems that four decoders according to AMD itself is basically an upper limit for them."


That doesn't make any sense. The ROB is after instructions have been cracked into uops; the internal format and length of uops is "whatever is easiest for the design", since it's not visible to the outside world.

This argument does apply to the L1 cache, which sits before decode. (It does not apply to uop caches/L0 caches, but is related to them anyway, as they are most useful for CISCy designs, with instructions that decode in complicated ways into many uops.)


Maybe it wasn't clear, but the article I linked is saying that compared to M1, x86 architectures are decode-limited, because parallel decoding with variable-length instructions is tricky. Intel and AMD (again according to the linked article) have at most 4 decoders, while M1 has 8.

So yes the ROB is after decoding, but surely there's little point in having the ROB be larger than can be kept relatively full by the decoders.


Well intentioned as that article may be, it makes plenty of mistakes. For a rather glaring one, no, uops are not linked to OoO.

Secondly, it ignores the existence of uop caches that x86 designs use in order to not need such wide decoders. Some ARM designs also use uop caches, FWIW, since it can be more power efficient.

That doesn't mean that fixed width decoding like on aarch64 isn't an advantage; it certainly is. Also, M1 is certainly a very impressive design, though of course it also helps that it's fabbed on the latest and greatest process.


I would argue that ISA does matter. Beyond the decode width issue, x86 has some material warts compared to ARM64:

The x86 atomic operations are fundamentally expensive. ARM’s new LSE extensions are more flexible and can be faster. I don’t know how much this matters in practice, but there are certainly workloads for which it’s a big deal.

x86 cannot context-switch or handle interrupts efficiently. ARM64 can. This completely rules x86 out for some workloads.

ARM64 has TrustZone. x86 has SMM. One can debate the merits of TrustZone. SMM has no merits.

Finally, x86 is more than an ISA - it’s an ecosystem, and the x86 ecosystem is full of legacy baggage. If you want an Intel x86 solution, you basically have to also use Intel’s chipset, Intel’s firmware blobs, Intel’s SMM ecosystem, all of the platform garbage built around SMM, Intel’s legacy-on-top-of-legacy poorly secured SPI flash boot system, etc. This is tolerable if you are building a regular computer and can live with slow boot and with SMM. But for more embedded uses, it’s pretty bad. ARM64 has much less baggage. (Yes, Intel can fix this, but I don’t expect them to.)


> The x86 atomic operations are fundamentally expensive. ARM’s new LSE extensions are more flexible and can be faster. I don’t know how much this matters in practice, but there are certainly workloads for which it’s a big deal.

There's also the RcPc stuff in ARM 8.3 and 8.4 that could make acquire/release semantics cheaper.

> the x86 ecosystem is full of legacy baggage.

Luckily for ARM servers we have SBSA that adds things like UEFI and ACPI to the ARM platform. :)


That baggage means I can at least boot an Intel system without first building my own device tree, then hacking the kernel to actually make it work. Meanwhile over in ARM M1 land, they apparently managed to break wfi.


Well put. People are being their usual team-sport participants on x86 vs ARM. Intel has execution problems in two departments - manufacturing and integration. The ISA is not an issue - they can very well solve the integration issues, and investing in semiconductor manufacturing is the need of the hour for the US, so I can imagine them getting some traction there with enough money and will.

IOW, even if Intel switched ISA to ARM it wouldn't magically fix any of the issues. We've had a lot of ARM vendors trying to do what Apple did for too long.


Intel / AMD had a duopoly on desktop / server because of x86 for a large number of years.

Loss of that duopoly - even with competitive manufacturing - has profound commercial implications for Intel. M1 and Graviton will be followed by others that will all erode Intel's business.


On the other hand, if x86 stays competitive there's a lot of inertia in its favor. So it could go either way. Desktop especially has been a tough nut to crack for anyone other than Apple, and they are only 8% of the market.


Probably more than 8% by value, and with the hyperscalers looking at Arm, that's a decent part of their business at risk - and that's ignoring what Nvidia, Qualcomm etc might do in the future.

Agreed that inertia is in their favour, but it's not a great position to be in - it gives them breathing space but not a long-term competitive advantage.


The demise of x86 isn't something that can be decreed by fiat. It could come about, but there would need to be a very compelling reason to motivate the transition. Technologies that form basic business and infrastructural bedrock don't go away just because of one bad iteration -- look at Windows Vista for example.

Even if every PC and server chip manufacturer were to eradicate x86 from their product offerings tomorrow, you'd still have over a billion devices in use that run on x86.


It's not the demise of x86. It's the demise of x86 as a moat.

Those are different things. We have seen a minuscule movement on the first, but we've been running towards the second since the 90's, and looks like we are close now.


Windows Vista's problems were relatively easy to solve though. Driver issues naturally sorted themselves out over time, performance became less of an issue as computers got more powerful, and the annoyances with Vista's security model could be solved with some tweaking around the edges. There wasn't much incentive to jump from the Windows ecosystem, as there was no doubt that Microsoft could rectify these issues in the next release of Windows. Indeed, Windows 7 went on to be one of the greatest Windows releases ever, despite being nothing more than a tweaked version of the much maligned Vista.

Intel's problems are a lot more structural in nature. They lost mobile, they lost the Mac, and we could very well be in the early stages of them losing the server (to Graviton, etc...) and the mobile PC market (if ARM PC chips take off in response to M1). Intel needs to right the ship expeditiously, before ARM gets a foothold and the x86 moat is irreversibly compromised. Thus far, we've seen no indication that they know how to get out of this downward spiral.


Windows 7 was a _substantially fixed_ version of the much maligned Vista. Its fixes for memory usage, for example, were dramatic.


>look at Windows Vista for example.

This is a terrible example, for the reasons stated in the article. Microsoft is already treating Windows more and more like a stepchild every day - Office and Azure are the new cool kids.


I was extremely careful not to say that x86 would go away!

But it doesn't have to for Intel to feel the ill effects. There just have to be viable alternatives that drive down the price of their x86 offerings.


I wasn't trying to refute your comment, nor to imply that you said x86 is on its way out the door, but we are talking about the future of x86 after all.

Intel has already driven its prices downward aggressively [0]. That seems to be part of their strategy to contain AMD while they get their own business firing on all cylinders again, and it's going to be true regardless of whether the majority of the market demands x86 or not. The more that Intel can pressure AMD's gross margin, the more relevant Intel's synergies from being an IDM can become.

[0] https://www.barrons.com/articles/intel-is-starting-a-price-w...


It's worth saying that CPU design isn't like software. Intel and AMD cores are fairly different, and the ISA is the only thing that unites them.

If X86 finally goes, and Intel and AMD both switched elsewhere we'd be seeing the same battle as usual but in different clothes.

On top of the raw uarch design, there are also the peripherals, RAM standards, etc.


Fair points, but if you're saying that if we moved to a non-x86 (and presumably Arm-based) world then it's business as usual for Intel and AMD, then I'd strongly disagree - it's a very different (and much less profitable) commercial environment with lots more competition.


The likelihood of Intel moving to ARM is probably nil. They have enough software to drag whatever ISA they choose with them, whereas AMD bringing up an ARM core could be fairly herculean as they have to convince their customers to not only buy their new chip but also trust AMD with a bunch of either missing or brand new software.


The days when Intel could single handedly successfully introduce a new (incompatible) ISA are long gone (if it ever could). I expect they will stick with x86 for as long as possible.


> The days when Intel could single handedly successfully introduce a new (incompatible) ISA are long gone (if it ever could).

Given Itanium, I'd say they never could (although that could have been a fluke of that specific design).


Itanium underdelivered on performance both in its native mode and in x86 emulation mode. Either of them could have tanked that design by themselves, but both applied.


Indeed and that was with HP.

Look long enough back and you have iAPX 432!


There was also the three-way ISA battle at Intel: 486 vs 860 vs 960. In the end they decided that legacy software was too valuable and redefined the 860 as a graphics co-processor and the 960 as an intelligent DMA controller, to keep people from building Unix computers with them.


AMD has already done an 8-core ARM chip, then abandoned it.

ISA changes require a long term investment and building up an ecosystem. Which were out of scope for AMD at the time.

I think the market has changed somewhat, and they don't have to do all the heavy lifting. Would be interesting to see what happens there.


They still have an architecture license I think.

Given that x86 still has an advantage on servers, it makes sense for them to push that for the time being. When the Arm ecosystem is fully established, I can't imagine it would be that hard to introduce a new Arm CPU using the innovations they've brought to x86 (chiplets etc).


I agree that the moat is falling away. There used to be things like TLS running faster because there was optimized x86 ASM in that path, but none for other architectures. That's no longer true.

I suppose Microsoft would be influential here. Native Arm64 MS Office, for example.


My view is that currently the only way for Intel to salvage themselves is to go the ARM route and start licensing x86 IP, and perhaps even open source some bits of the tech. They are unable to sustain this tech by themselves, or even with AMD, anymore. It seems to me that when Apple releases their new CPUs I am going to have to move to that platform in order to keep up with the competition (the quicker the core, the quicker I can do calculations and deliver the product). Currently I am on AMD, but it is only marginally faster than the M1, it seems.


Are they able to even do that legally? I'm pretty sure the licensing agreement for x86 with AMD explicitly prohibited this for both parties.


Contracts and agreements can usually be dissolved if all parties agree to do so. Intel and AMD could also continue to change nothing and basically play chicken until they both run into the ground.


M1 has a more advanced node compared to both Intel and AMD designs. Architecture goes a long way of course.


I worked at Intel in 2012 and 2013. Back then, we had a swag t-shirt that said "I've got x86 problems but ARM aint one".

I went and dug that shirt out of a box and had a good laugh when Apple dropped the M1 macs.

Back then, the company was confident that they could make the transition to EUV lithography and had marketing roadmaps out to 5nm...


Perhaps now the "Why do people obsess over manufacturing?" question that many tech workers asked as other US industries were decimated will become a bit less quizzical.


It was more economics and political policy wonks, economists and politicians in general, who didn't just ask that question, but thrust forth the prescriptive rhetoric through a large grid of trade agreements, "Globalism is here to stay! Get over it! Comparative Advantage!" This is only the first in a long, expensive series of lessons these people will be taught by reality in the coming decades. I'm guessing this kind of manufacturing loss will cost at least $1T to catch up and take the lead again after all is said and done, assuming the US can even take the lead.

US organization, economic, and financial management at the macro scale is going through a kind of "architecture astronaut" multi-decade phase with financialization propping up abstracted processes of how to lead massive organizations as big blocks on diagrams instead of highly fractal, constantly shifting networks of ideas and stories repeatedly coalescing around people, processes, and resources into focused discrete action in continguous and continuous OODA feedback loops absorbing and learning mistakes along the way. Ideally, the expensive BA and INTC lessons drive home the urgent need for an evolution in organizational management.

I wryly think how similar the national comparative advantage argument looks to much young adult science fiction portrayal of space opera galactic empire settings with entire worlds dedicated solely to one purpose. This world only produces agricultural goods. That world only provides academician services. It is a very human desire to simplify complex fractal realities, and effective modeling is one of our species' advantages, but at certain scales of size, agility and complexity it breaks down. We know this well in the software world; some problems are intrinsically hard and complex, and there is a baseline level of complexity the software must model to successfully assist with the problem space. Simplifying further past that point deteriorates the delivery.


The US decided that it didn't like all the political disorder that came with managing a large proletariat. Instead, it decided to outsource the management of that proletariat to Asia and the 'global south.' Our proletariat instead was mostly liquidated and shifted itself into the service industry (not that amenable to labor organization) and the welfare/workfare rolls.

There are so many things that the US would have to reform to become more competitive again, but we are so invested into the FIRE economy that it's not unlike the position of the southern states before the Civil War: they were completely invested into the infrastructure of slavery and could not contemplate an alternative economic system because of that. The US is wedded to an economy based on FIRE and Intellectual Property production, with the rest of the economy just in a support role.

I'm not really a pro-organized-labor person, but I think that as a matter of national security we have to figure out a way to reform and compromise to get to the point to which we develop industry even if it is redundant due to globalization. The left needs to compromise on environmental protection, the rich need to compromise on NIMBYism, and the right needs to compromise on labor relations. Unfortunately none of this is on the table even as a point of discussion. Our politics is almost entirely consumed by insane gibberish babbling.

This became very clear when COVID hit and there was no realistic prospect of spinning up significant industrial capacity to make in-demand goods like masks and filters. In the future, hostile countries will challenge and overtake the US in IP production (which is quite nebulous and based on legal control of markets anyway) and in finance as well. The US will be in a very weak negotiating position at that point.


An IP based economy just on its face seems like such a laughable house of cards. So your economy is based on government enforced imaginary rights to ideas? The proliferation of tax havens should be a sign that the system is bullshit - it exposes how little is actually keeping the profits of IP endeavors within a nation.

There is incredibly little respect for the society owning the means of production in a tangible real sense, instead we have economies that run on intangibles, where the intangibles allow 600lb gorillas like Oracle to engage in much rent seeking while simultaneously avoiding paying dues to the precise body that granted them their imaginary rights. The entire status quo feels like something some rich tycoons dreamed up to sell to the public the merits of systematically weakening their negotiating position on the promise that one day a Bernie Sanders type would descend from the heavens and deliver universal basic income fueled by the efficiency of private industry through nothing but incorruptability and force of personality.

China seems to be successful in part because they have no qualms with flexing dictatorial power to increase the leverage of the state itself. This may be less economically efficient but it means they actually get to harvest the fruits of any efficiency. Intellectual property law? They just ignore it and don't get punished, since punishing them would be anti-trade.


Yes, the IP economy rests on a bunch of fragile international treaties the US has with its partner states. The government provides the court system that enforces IP claims, but the costs of litigation are mostly carried by rights holders. So when you are sued for patent infringement, the court's costs are fairly minimal and paid by both sides -- but the court's power is just an externality of state power.


> financialization propping up abstracted processes of how to lead massive organizations as big blocks on diagrams instead of highly fractal, constantly shifting networks of ideas and stories repeatedly coalescing around people, processes, and resources into focused discrete action in continguous and continuous OODA feedback loops absorbing and learning mistakes along the way.

I had trouble reading this without falling into the cadence of Howl! by Allen Ginsberg.


Thanks for introducing me to that, it was enjoyable to listen to Allen.

[1] https://www.youtube.com/watch?v=MVGoY9gom50


My pleasure :)

But to my chagrin, I just realized that the reading cadence I had in mind wasn't Allen Ginsberg's, but instead Jack Kerouac's. [0]

[0] https://youtu.be/3LLpNKo09Xk?t=197


Ha, that Beat Generation background music really sets the tone.


Are you referring to the music that's (mostly?) Steve Allen's piano playing? Yeah, totally.

I also never knew that Steve Allen was so good at piano. Before watching that video, all I knew about him was that he'd had a popular TV show.


This was very nicely expressed, I would read a book in this style!

By the way, I'm not sure the hnchat.com service linked in your profile works any more?


Thanks. I'm not like that IRL face to face except for a very small subset of close friends. If you like that kind of high information rate, discursive exploration, then you'd probably like or are already hanging around the folks at dredmorbius' various lairs, what used to be slatestarcodex, and gwern.net as jumping off points. I still hold wan hope the Net will lead to the kind of introspection writing tends to encourage. Thanks for letting me know hnchat is gone. I moved to protonmail.ch and updated my HN profile appropriately.


It only becomes less quizzical when it hits home -- that is when one's own job is on the line.


> And in that planning the fact that TSMC’s foundries — and Samsung’s — are within easy reach of Chinese missiles is a major issue.

Are processor fabs analogous to auto factories and shipyards in World War II? Is the United States military's plan for a nuclear exchange with China dependent on a steady supply of cutting-edge semiconductors? Even if it is, is that strategy really going to help?

This article is mostly concerned with Intel's stock price. Why bring this into it? Let's say Intel gets its mojo back and is producing cutting-edge silicon at a level to compete with TSMC and supplying the Pentagon with all sorts of goodies... and then China nukes Taiwan? And now we cash in our Intel options just in time to see the flash and be projected as ash particles on a brick wall?

"The U.S. needs cutting edge fabs on U.S. soil" is true only if you believe the falied assumptions of the blue team during the Millenium Challenge, that electronic superiority is directly related to battlefield superiority. If semiconductors are the key to winning a war, why hasn't the U.S. won one lately?

And what does any of this have to do with Intel? Why are we dreaming up Dr. Strangelove scenarios? Is it just that some people are only comfortable with Keynesian stimulus if it's in the context of war procurement?


I don't feel that there is a meaningful TSMC alternative today. Samsung, Intel and GlobalFoundries are not suitable replacements for TSMC with regards to throughput or technology.

The world does need some meaningful fabs outside of Taiwan/South Korea. All of the <10nm semiconductor and most of the >10nm semiconductor fabrication takes place within a 750km/460mile radius circle today. That is risky.

Israel, Mexico, Germany, Canada, Japan (not that it would grow the circle much...) are all viable places to run a foundry. The fact that Intel is one of the few outside that circle doesn't inspire confidence in the security of the global supply chain.


One of the key misconceptions that I see repeated in the comments for this article is that lithography with sub-10nm feature size is somehow universally appropriate and preferable. This may be true for high-performance computing or other applications that are sensitive to the ratio of compute to price (or mobile consumer devices with a small thermal envelope), but it's not necessarily true for power electronics, automotive ICs, or missile control systems. Some of those chips aren't even made of silicon, instead being made of more expensive gallium nitride or gallium arsenide because of their thermal, high-frequency, radiation, and voltage properties.


I mostly agree that there isn't a great alternative to TSMC, but I would point out that the 2021 Qualcomm Snapdragon 888 processors are being made by Samsung with their 5nm process (in addition to their new Exynos 2100). Intel and GlobalFoundries aren't really replacements, but Samsung has been winning business for latest-generation flagship processors. Maybe it isn't as advanced as TSMC and maybe Samsung will have problems, but a lot of the 2021 flagship phones will be shipping with Samsung-manufactured 5nm processors.

Samsung seems to be keeping it close.


It's still within that circle. Samsung is a great fab, probably the only real contender to TSMC. Samsung has about 17% of global semiconductor demand vs TSMC at 50%+, and included in that 17% is all of Samsung's own demand (Exynos, SSD/storage, memory, etc).

Further complicating the issue, South Korean SK Hynix is buying Intel's NAND business this year and will likely shift production out of Intel's US fabs when the time comes.



> If semiconductors are the key to winning a war, why hasn't the U.S. won one lately?

Semiconductors aren’t going to help change people’s cultures or religion or tribal affiliations without decades of heavy investment in education and infrastructure, or other large scale wealth transfers.

But if “winning a war” means killing the opposing members while minimizing your own losses, surely electronic superiority will help.


This was a well-written article, but I don't think it came from someone with a deep understanding of semiconductor technology and fabrication.

Intel hasn't lost to Apple and AMD because they employ idiots, or because of their shitty company culture (in fact, they're doing surprisingly well in spite of their awful company culture). Intel lost because they made the wrong bet on the wrong type of process technology. 10 years ago (or thereabouts), Intel's engineers were certain that they had the correct type of process technology outlined to successfully migrate down from 22nm to 14nm, then down to 10nm and eventually 7, 5, and 3nm. They were betting on future advances in physics, chemistry, and semiconductor processes. Advances that didn't materialize.

EUV turned out to be the best way to make wafers at smaller transistor sizes.

So now Intel's playing catch up. Their 10nm process is still error-prone and far from stable. There are no high-performance 10nm desktop or server chips.

That's not going to continue forever though. Even on 14nm, Intel chips, while not as fast as Apple's M1 or AMD's Ryzen 5000 series, are still competitive in many areas. Intel's 14nm chips are over 6 years old. The first was Broadwell in October 2014. What do you think will happen when Intel solves the engineering problems on 10nm, and then 7nm? And then 5nm?

It took AMD 5 years to become competitive with Intel, and over 5 to actually surpass them.

If you think the M1 and 5950X are fast, then wait till we have an i9-14900K on 5nm. It'll make these offerings look quaint by comparison.

EDIT: I say this as a total AMD fanboy by the way, who bought a 3900X and RX 5700 XT at MicroCenter on 7/7/2019 and stood in line for almost five hours to get them, and as someone who now has a Threadripper 3990X workstation. I love AMD for what they've done... they took us out of the quad-core paradigm and brought us into the octa-core paradigm of x86 computing.

But I am under no illusions that they're technically superior to Intel. Their process is what allows them to outperform Intel, not their design. I guarantee you that if Intel could mass produce their CPUs on their 7nm process (which is far, far more transistor dense than TSMC's 7nm), AMD would be 15-25% behind on performance.

It isn't so much that AMD is succeeding because they're technically superior... they're succeeding because Zen's design team made the right bet and because Intel's engineering process team made the wrong bet.


I agree with most of what you say, except that AMD was also technically superior during these last years.

Intel certainly has the potential of being technically superior to AMD, but they do not appear to have focused on the right things in their roadmaps for CPU evolution.

Many years before the launches of Ice Lake and Tiger Lake, Intel's enthusiastic presentations about the future claimed that these would bring some sort of marvelous improvements in microarchitecture, but the reality has proven to be much more modest.

While from Skylake in 2015 to Ice Lake in 2019 there was a decent increase in IPC, it was still much less than expected after so many years. While they were waiting for a manufacturing process, they should have redesigned their CPU cores to get something better than this.

Moreover, the enhancements in Ice Lake and Tiger Lake seem somewhat unbalanced and random; there is no grand plan that can be discerned about how to improve a CPU.

On the other hand, the evolution of the Zen cores was perfect: every time, the AMD team seems to have been able to add precisely the improvements that could give a maximum performance increase with a minimum implementation effort.

Thus they were able to go from Zen 1 (2017), with an IPC similar to Intel Broadwell (2014), to Zen 2 (2019), with an IPC a little higher than Intel Skylake (2015), and eventually to Zen 3 (2020), with an IPC a little higher than Intel Tiger Lake (2020).

So even if the main advantage of AMD remains the superior CMOS technology they use from TSMC, purely through the competence of their design teams they have gone from being 3 years behind Intel in IPC in 2017 to being ahead of Intel in IPC in 2020.

If that is not technical superiority, I do not know what is.

Like I have said, I believe that Intel could have done much better than that, but they seem to have done some sort of a random walk, instead of a directed run, like AMD.


Interesting perspective. Paradoxically, I actually want Intel to stay relevant.

I think that everyone will take advantage of the migration to ARM to push more lock-in, despite the supposedly open ARM architecture.

A sort of poison pill: "you get more performance and better battery life, but you can't install apps of type A, B and C and those apps can only do X, Y and Z".


Agreed. I’m not super knowledgeable but AMD just caught up after years of Intel stumbling to get to 10nm. Intel’s 14nm chips are still very competitive with AMD, so it seems that Intel on 5nm should beat AMD on 5nm, for example.

Intel will either solve its process issues in its own factories or, worst case, they outsource production to TSMC - either of which eliminates any process/manufacturing advantage held by AMD and Apple.

On a 5-10 year timeline, I don’t see a reason for Intel to continue stumbling on 5nm and 3nm processes though.


A major incentive for the US government to get involved is touched on. Not only is Taiwan 'just off the coast' from China, China is coming for it and intends to assimilate Taiwan back into China just as Hong Kong and Macau have been.

At that point, the only sustainable leverage the rest of the world would have in chip technology would be ASML.


It is thought provoking to consider that the USA's interest in Taiwan is more about protecting TSMC than protecting a democratic state in East Asia. By this line of thinking, building capacity in Arizona, or anywhere outside Taiwan, is good for TSMC and for the USA but weakens Taiwan.


> It is thought provoking to consider that the USA's interest in Taiwan is more about protecting TSMC than protecting a democratic state in East Asia.

Why is it thought provoking? It's always realpolitik. All wars are.

There's always a pretext but the subtext is what actually causes wars.


Taiwan is also strategically important because it guards access from the Pacific to China and back. To the north there are Japan and South Korea (US allies), and to the south there are Vietnam and the Philippines, who already feel the pressure of the expansion of China's influence (Chinese military bases in the South China Sea) and are thus likely to side with the US in the future. Right now China is somewhat contained, but if the CCP can reassert itself on Taiwan, the situation will flip and the CCP will have full control over access to the mainland from the sea, and can freely access the Pacific.


Contrary to the article, AMD is not yet shipping 5nm in volume. (Rumors point to Zen 4 later in 2021.)

Additionally, Intel works with ASML and other similar suppliers. Intel even owns a chunk of ASML.


I looked up Canon's and Nikon's lithography numbers yesterday and was shocked to see that they barely make any money off their lithography businesses, considering that both companies make DUV machinery. Although they don't have the street cred of ASML, they are important because (A) there's a shortage, and (B) the demand side of the machinery market needs to foster competition in order to keep ASML from gouging its customers.

To go even further than your comment (with which I agree, 5nm isn't the center of AMD's income right now), TSMC isn't even making most of its wafer revenue from 5nm and 7nm. Straight from the horse's mouth (Wendell Huang, CFO):

"Now, let's move on to the revenue by technology. 5-nanometer process technology contributed 20% of wafer revenue in the fourth quarter, while 7-nanometer and 16-nanometer contributed 29% and 13%, respectively. Advanced technologies, which are defined as 16-nanometer and below, accounted for 62% of wafer revenue. On a full-year basis, 5-nanometer revenue contribution came in at 8% of 2020 wafer revenue. 7-nanometer was 33% and 16-nanometer was 17%."

https://www.fool.com/earnings/call-transcripts/2021/01/14/ta...


they'll still be depreciating the newer fabs.


Sure, but my understanding is that (assuming that you have a choice about how much depreciation expense to write down) from a tax perspective that's what you're supposed to do when your business is making money. It's also a form of P&L smoothing.


Every time a stratechery article is posted here I wonder how long it'll take before they reference a past article. It was the first sentence.


I wonder how the move from x86 to ARM is going to affect desktop apps. With the move to ARM, Apple is already pushing its iOS apps into macOS. Once it becomes commonplace on Windows, it would be super easy to run Android apps on Windows via simulation (rather than emulation, which is much slower).

Given that mobile apps are more lightweight and consume far less resources than their electron counterparts, would people prefer to use those instead? Especially if their UIs were updated to support larger desktop screens.


Why do you think mobile apps are more lightweight?

Android phones these days have at least 4GB of RAM, and mobile apps are in general more limited; plus, you run fewer of them in parallel, as they tend to be offloaded from RAM once the limit is reached.


> Solution One: Breakup

> Solution Two: Subsidies

Solution Three: lower prices/margins (temporarily) to match the value proposition of AMD on Windows PCs and Linux Cloud servers.


Intel needs to change the way it does business. Simply lowering prices won't achieve that. Becoming the cheap option is likely the beginning of a death spiral the company will never recover from -- it will give the company an excuse to double down on a failing strategy.

Furthermore, AMD is not the biggest threat to Intel. The biggest threat is cloud providers like Amazon designing their own chips, which is already happening. If those succeed, who would build them? Certainly not Intel, if they continue to manufacture only their own designs -- that business, like so much other fab business, will go to TSMC.


> Becoming the cheap option is likely the beginning of a death spiral...

Maybe. I didn't suggest becoming the cheap option; I suggested re-evaluating its premium pricing strategy in the short term to reflect current and future customer value. Margin stickiness seems to be a built-in bias similar to the sunk-cost fallacy.

Server-side Neoverse is a threat but a slow-moving one. I'm assuming that "Breakup" (going fabless) will not show benefits for many months if not years. Price seems like an obvious lever; perhaps I'm being naive about pricing but it's not obvious to me why.


Solution Four: Is it even possible that Intel and AMD merge?! With ARM-based chips clearly accelerating and poised to take a big market share (Apple, Nvidia, Amazon, and Qualcomm becoming major players), is there less of an antitrust issue?


It is more likely they will stick to their guns like IBM sticking with Power (which is technically awesome) but still pricing it too high (because cost economics is likely out of whack) and in the process they will lose developer mindshare.

I really hope Intel does better than IBM with Power.


If individual companies developing their own chips is a trend, and it sure seems like one is starting, Intel has a lot more competition to contend with. Before, the answer was always buy; now add build into the equation. That's where Intel's problems are coming from. That's a lot of headwinds. They could capture that by splitting and going the TSMC route, specializing further on design, and using some form of licensing model like ARM's.

This is like the Microsoft pivot into cloud to save itself.


The funny thing is, in the time period being addressed first in the article (2013) Intel was better at mobile than it is now. Its Bay Trail and Cherry Trail chips had more performance per dollar than even today's offerings, eight years later. Intel just decided low-margin wasn't a concept in which they were interested.


When I learned that Intel and Nvidia were allegedly fixing the laptop market, I started hoping that this company either goes down or goes through a substantial transformation. Their current management situation is untenable. If I were a shareholder (fortunately I am no longer), I would pressure them to sack everyone involved.


C'mon Intel, this is your opportunity to go all in on RISC-V


why?


They're not losing money fast enough on x86?


They are not losing money....


Totally reasonable argument, and I think most would be better off with an independent, US-based foundry.

Unfortunately, I doubt that the US government functions well enough at this point to recognize the threat and overcome the influence Intel's money would wield against the effort.


I thought GlobalFoundries was US based, and they have a fab in Vermont (and Germany and Singapore)


Yes, but GlobalFoundries have given up development on leading edge process nodes.


It's a US company but it's owned by the Emirate of Abu Dhabi.


A question for those who've been around in tech longer: was Google really the first and "disruptive" user of x86 commodity hardware in datacenters that everyone else then lagged behind? Or was it just a general wave and shift in the landscape?


> a federal subsidy program should operate as a purchase guarantee: the U.S. will buy A amount of U.S.-produced 5nm processors for B price; C amount of U.S. produced 3nm processors for D price; E amount of U.S. produced 2nm processors for F price; etc.

I really like this concept, though I’d advocate for a straight subsidy (sales of American-made chips to a U.S.-registered and based buyer get $ credit, paid directly to the supplier and buyer, on proof of sale and proof of purchase) given the logistical issues of the U.S. government having a stockpile of cutting-edge chips it can’t dump on the market.


If Intel made an SBC or SoC design for low-power applications, I'd consider it if they had long-term support. Intel used to power all of our edge needs in POS and security; now I see that slipping as well.


Slightly more aggressive take: fully automated contract manufacturing is the future, those that resist its march will be trampled and those that ignore it will be left behind.

Semiconductor manufacturing is just one example where this is happening, electronics is another. Maybe one day Toyota Auto fabs will be making Teslas.


Companies that use millions of micros will grow tired of paying royalties for ARM & other IP. I'm putting my money on RISC-V. If Intel is smart, they will too and offer design customization and contract manufacturing of RISC-V.


Gosh, splitting (if not anti-trust at least pro-competition) then subsidies sounds like way too sane government planning for the US to actually do it.


They could make a strategic investment in reconfigurable computing, and pivot around this, if they can survive long enough to profit from it.


Solution three: Invest to make x86 as power efficient as, or better than, its ARM counterparts, while outsourcing manufacturing to TSMC to fill the gap, reaching the level that future Apple M chips will achieve. At the same time, start building bare-metal cloud hosting solutions that allow other companies to provide their own cloud offerings (using that energy efficiency to their advantage), and also use that energy efficiency to create a mobile platform on which providers like Mozilla and Ubuntu can build operating systems.


What was "the memory crash of 1984"?


DRAM at one point in time accounted for over 90% of Intel’s sales revenue. The article states that DRAM was essentially the “technology driver” on which Intel’s learning curve depended. Over time the DRAM business matured as Japanese companies were able to involve equipment suppliers in the continuous improvement of the manufacturing process in each successive DRAM generation. Consequentially, top Japanese producers were able to reach production yields that were up to 40% higher than top U.S. companies. DRAMs essentially became a commodity product.

Intel tried to maintain a competitive advantage and introduced several innovative technology design efforts with its next generation DRAM offerings. These products did not provide enough competitive advantage, thus the company lost its strategic position in the DRAM market over time. Intel declined from an 82.9% market share in 1974 to a paltry 1.3% share in 1984.

Intel’s serendipitous and fortuitous entry into microprocessors happened when Busicom, a Japanese calculator company, contacted Intel for the development of a new chipset. Intel developed the microprocessor but the design was owned by Busicom. Legendary Intel employee Ted Hoff had the foresight to lobby top management to buy back the design for uses in non calculator devices. The microprocessor became an important source of sales revenue for Intel, eventually displacing DRAMs as the number one business.

https://anthonysmoak.com/2016/03/27/andy-grove-and-intels-mo...


It's time for more predictions.

1. Apple's CPUs will not improve anywhere near as fast as the competition. Computation per watt of (some) competitors' products will outpace Apple's in just a few years.

2. Intel will come roaring back on the back of TSMC, but first will need to wait on growth of manufacturing capacity, as certain competitors can get more money per mm^2.

3. Intel will fail to address its product-quality problem, but it will not end up hurting them.


Thought provoking, to be sure, but the problem with his solution of building up Intel's manufacturing arm through spinoff and subsidy is that we simply don't have the labor force to support it, and with much more controlled immigration in the future, it will take decades to build up the engineering education needed for the US to compete with Taiwan, South Korea, and of course China.


A solution to yesterday's problems shouldn't discount tomorrow's innovations. I don't think iOS and Android are in the best long-term position. There are more things happening in our global infrastructure that should be accounted for. The Internet is primed for a potential reverse/re-evolution of itself (5G is a large factor in this).


The problem was created when they lost focus on energy efficiency. The rest is just an after-effect.


How does RISC-V factor into all of this? China is working on its own chips after Trump poked the bear. It wouldn't surprise me if, seemingly out of nowhere, they suddenly started promoting and selling RISC-V chips and compatible motherboards at competitive prices.


This article seems to mix up AMD and ARM


If TSMC is going to be a monopolist fab for x86, then they will ultimately suck all the profits out of the server/desktop markets. This isn't just kinda bad news for Intel/AMD, it's really bad news.


Well I got down to the part where the author said that AMD never threatened Intel in the data center market and I closed the tab. AMD won entire generations of data center orders while Intel was flailing with Itanium and NetBurst.


Speaking of Itanium, if the x86 dam has truly burst, I'd much rather see something more like the Itanium than RISC-V. Something new.

It's a shame the Mill is so secretive, actually; their design is rather nice.


One of RISC-V's main goals is to be boring and extensible. Think of it as the control-plane core, or the EFI, for a larger system. You would take RISC-V and use it to drive your novel VLIW processor.


How? RISC-V will have to have a memory model, for example, which will define at least some effective execution model. If you turn RISC-V into not-RISC-V you might as well just start from scratch.


Pretty sure you can't take RISC-V and use it to drive a Mill.


Nah, I think the Itanic concept is dead in the water.

VLIW works (especially in the way it was done in Itanium, IIRC) only when your workload is very predictable, or perhaps when your compiler manages to be an order of magnitude smarter than it is today (even with LLVM, etc.).

It seems even the M1 prefers to reorder scalar operations rather than work with SIMD ops in some cases (this is one of its processors).
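As a concrete illustration of the compiler's problem (my example, not from the thread): a VLIW compiler has to find independent operations at compile time to fill each issue slot, and code like the pointer chase below gives it almost nothing to bundle.

    /* A linked-list sum: each load depends on the previous one, so a VLIW
       compiler cannot statically pack these into wide bundles -- most issue
       slots end up as NOPs. An out-of-order core only does better by
       dynamically overlapping whatever independent work surrounds the loop. */
    struct node { long value; struct node *next; };

    long sum_list(const struct node *n)
    {
        long total = 0;
        while (n) {
            total += n->value;  /* depends on the load of *n */
            n = n->next;        /* next address unknown until this load completes */
        }
        return total;
    }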


Itanium is dead, but VLIW as a concept is still interesting to me.

If you look at uops-executed-per-port benchmarks, you can see that CPUs are far from all-seeing eyes.


AMD and Nvidia both used VLIW in the past, and both moved away because they couldn't get it to run efficiently. If even embarrassingly parallel problems can't execute efficiently on VLIW architectures, I somehow doubt that general-purpose CPU workloads will either.

The final versions of Itanic started adopting all the branch predictors and trappings from more traditional chips.

The problem is that loops theoretically cannot be completely predicted at compile time (the halting problem). Modern OoO CPUs are basically hardware JITs that change execution paths and patterns on the fly based on previous behavior. This (at least at present) yields much better information, and thus much better real-world performance, than what the compiler alone can see.
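A small C example of that gap (mine, not the commenter's): the compiler cannot know which way this branch usually goes, but a dynamic predictor watching the last few thousand iterations often can.

    #include <stddef.h>

    /* The taken/not-taken pattern of this branch depends entirely on the
       runtime contents of `data`. A static compiler can only guess (or use
       profile data), while the branch predictor adapts on the fly -- if the
       data is mostly above or below the threshold, the hardware learns that
       pattern within a few iterations. */
    long count_large(const int *data, size_t n, int threshold)
    {
        long count = 0;
        for (size_t i = 0; i < n; i++) {
            if (data[i] > threshold)  /* outcome unknown until runtime */
                count++;
        }
        return count;
    }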


Mill is claimed to run general purpose code well, unlike Itanic and VLIW in general. Are you claiming Mill would be like Itanium?



GCP only offers EPYC CPUs in some regions. None of those regions are ones we use! Gah!

Can someone update us on where AWS offers them, if at all?


If you think about how these providers deploy a cloud facility, it makes sense that the offerings in a given place are relatively static. The whole network design, thermal/mechanical design, and floor plan are built with certain assumptions, and they can't just go in and rack up some new machines. It evolves pretty slowly, and when a facility gets a new machine type, it is because they refresh the whole thing, or a large subset of it.

That said, the EPYC machine type is available in 12 zones of four different regions in the US, which isn't bad.


Usually you would have some number of enclosed aisles of racks making up a deployment pod.

You can usually customize machine configuration within a deployment pod while staying within the electrical and thermal envelope of the aisle and without changing the number of core-spine to pod-spine network links.

You could potentially build out a data hall but not fully fill it with aisles. As demand starts to trend up, you can forecast two quarters into the future and do the build-outs with just one quarter of lead time.

I would expect very large operators to have this supply-chain song and dance perfected.
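A toy back-of-the-envelope check of the kind of constraint involved (every number below is invented for illustration):

    #include <stdio.h>

    /* Can a hungrier server SKU be swapped into an existing aisle without
       blowing its electrical budget? All figures are made up. */
    int main(void)
    {
        const double aisle_budget_kw  = 400.0;  /* assumed power envelope per aisle */
        const double racks_per_aisle  = 20.0;
        const double servers_per_rack = 40.0;
        const double old_server_kw    = 0.45;   /* assumed draw of the current SKU */
        const double new_server_kw    = 0.55;   /* assumed draw of the proposed SKU */

        double old_total = racks_per_aisle * servers_per_rack * old_server_kw;  /* 360 kW */
        double new_total = racks_per_aisle * servers_per_rack * new_server_kw;  /* 440 kW */

        printf("current: %.0f kW, proposed: %.0f kW, budget: %.0f kW\n",
               old_total, new_total, aisle_budget_kw);
        printf("swap fits as-is: %s\n", new_total <= aisle_budget_kw ? "yes" : "no");
        return 0;
    }

In this made-up case the denser SKU busts the envelope, which is exactly the sort of reason a given zone's machine-type menu stays static for a while.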


They have perfected it, just not in the manner that you are suggesting.


Intel is being cannibalized by companies pulling the rug out from under it by developing their own chips, yet it makes no effort to return the favor. They need to make a phone with an open-source OS; maybe that will make a dent and serve as a carrier for their chips. They don't need to do all the work themselves; they can partner.


Let's speed-run a doomsday:

2022: Share price tanks, CEO booted. They shuffle but don't have a plan; no longer a blue chip, so financing is hard to come by. Delisting. Everyone booted. Doors close.

2023/4: AMD is the only game in town. Profits and volumes are up; so are the faults and vulnerabilities. They spend most of their effort on fixes, not innovation.

2024: M1 chip available in Dells/HPs/ThinkPads. AWS only uses Graviton unless the customer specifically buys another chip.

2025: Desktop ARM chips available in Dells/HPs/ThinkPads; AWS launches a 'compile-to-anything' service, with a decompiler and recompiler on demand.

2026: AMD still suffering. Hires Jim Keller for the 20th time. Makes a new Zen generation that beats the M1 and ARM. AMD goes into mobile CPUs.



