Intel delays 10nm Cannon Lake processors, again, until late 2019 (theinquirer.net)
153 points by redial on July 27, 2018 | 84 comments



It's fun to dig up old predictions of where we'd be today. Here's an article from 2002: https://www.eetimes.com/document.asp?doc_id=1176510

>Moore's Law scales to at least 9nm: technologist

>It'll be at least a human generation before Moore's Law begins to run out of gas at around the 9nm and even then it may thrive, TSMC's chief technologist said Tuesday (March 12). Calvin Chenming Hu told an audience at the annual Semico Summit conference here that the 9nm node "can be ready more or less on time, in 2028 according to long-term forecasts or 2024 according to the 2002 (industry roadmap)."

Intel is off by 4-5 years according to their initial estimates (in 2011) of hitting 10nm, but they are 5-10 years ahead of where TSMC thought we would be 15+ years ago. TSMC was manufacturing at 10nm last year, incidentally (Qualcomm's Snapdragon 835).


As far as I hear from industry specialists (though I don't know first hand), the 'node' terms have become little more than marketing. I believe in 2002 they were still a useful metric of feature size, which is why 9nm would still have been ahead.


Bad predictions go in both directions. In 2000, Intel was saying that within five to ten years CPUs would run at 10 GHz and be made with EUV lithography. 18 years later and 10 GHz never came close to happening. EUV didn't happen either, though we may finally be close.

https://www.theregister.co.uk/2000/12/11/intel_plans_1500_10...

https://www.anandtech.com/show/680/6


Isn't comparing TSMC to Intel process nodes like comparing apples to oranges?


Yeah they aren't quite comparable; I didn't want to dig into the complexities of that, which is why I followed the last thought with "incidentally".


It's more one kind of apple to another.

They're both process nodes, with certain minor-ish differences here and there. TSMC calls theirs 7 nm, Intel calls theirs 10 nm.

Their performance characteristics might be around the same. In practice... who knows. We do know that Intel's current 10 nm CPUs are pretty much broken: the best they can currently do is a useless laptop-grade 2.5 GHz dual-core with no functional iGPU. Not sure about the HT, though.


I really hope they don't give up on making discrete GPUs again. That internal program has been brought back up and killed a few times.

I used to own an i740 way back in the day, and my cousin even worked on that video driver. With nVidia's buyout of 3Dfx and PowerVR pretty much only doing mobile phones/tablets, it'd be nice to see a 3rd player in the nVidia/AMD space again.

I know Matrox still exists, but from what I can tell, they just make cards for 10-20 monitor setups (kiosks, stores, airports, etc.).


Interesting... the i740 drivers were very broken. I remember having a lot of issues with many games.


It might be, but if AMD/TSMC start cranking out their 7nm stuff next year, I've read that marketing can't bridge that gap: the 7nm people will have the leading process technology in the world, and Intel will be trailing for the first time in a long, long time.

But who can predict what will happen...


They both manufacture silicon gates... what does it matter if the production infrastructure on top is different?


The way they measure size is different.


Ah. What confused me was the word “process”, which typically refers to the stack of things that take a high-level description of a circuit and turn it into instructions for the machine to execute to make the wafer.


The engineering involved in shrinking the gates these days is Herculean enough that Intel's 10nm and 7nm processes are genuinely distinct processes in that sense as well.


In semiconductor fabrication “process” almost always refers to the manufacturing process, not the design process.


>Intel is off by 4-5 years according to their initial estimates (in 2011) of hitting 10nm

In 2011 they were expecting 10nm in 2016. So it was not really off by 4-5 years, more like 3-4 years.

>TSMC's chief technologist said Tuesday (March 12).

Somewhere along the line, TSMC's node naming stopped following Intel's node naming, so the 9nm they mention is more like today's TSMC 5nm. If you look at 2024 for TSMC's 5nm, it isn't too far off: TSMC is planning to have 5nm in 2020. The four-year speedup has come from heavy investment in mobile SoCs and changes in industry scale that were not foreseen at the time.


Note that the effective gate length on an Intel "10nm" finfet is 18nm.


There are even rumors that Intel delayed the much more important Xeons to 2020, which could give AMD a one-year head start in the server space with Zen 2 on 7nm (comparable to Intel's 10nm).

Copy paste from my posting yesterday:

EDIT: it seems the Twitter post got deleted.

EDIT2: anandtech still has the pictures: https://www.anandtech.com/show/13119/intels-xeon-scalable-ro...

> If the rumor [0] that Intel's first 10nm server chip will only release mid 2020 is true, then AMD's shares will probably skyrocket again if they truly can release their Zen 2 server CPU mid 2019 (on 7nm which is comparable to Intel's 10nm).

[0] https://twitter.com/david_schor/status/1022142835989118977


It used to be that Intel was undisputed king of the silicon race. It's kind of shocking that they've fallen so far behind.

TSMC is already doing 7nm mass production right now:

https://www.digitimes.com/news/a20180622PD204.html

(arguably, what they call "7nm" isn't quite that, but still...)


Every single time an article comes up about this, someone says this, and every single time someone refutes it by saying that 7nm is a lie. Intel 10nm is supposed to be denser than TSMC 7nm (if they ever get there).


This fact keeps getting more and more exaggerated as time goes on.

Yes, Intel's 10nm absolutely smashes Samsung's, GF's and TSMC's 10nm processes. Yes, Intel's 10nm should be very competitive with and comparable to Samsung/GF/TSMC's 7nm processes. But all of the Intel 10nm metrics are slightly worse than the 7nm processes it's now competing with.

To cherry-pick one metric as an example (a quick ratio sketch follows the numbers):

  * TSMC 10nm sram bitcell size: 0.042 µm²  
  * Intel 10nm sram bitcell size: 0.031 µm²  
  * TSMC 7nm sram bitcell size: 0.027 µm²
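
For what it's worth, here's a rough back-of-the-envelope sketch on just those three numbers (Python, nothing more than ratios):

  # SRAM density is ~1 / bitcell area; ratios vs. TSMC 10nm as the baseline.
  bitcell_um2 = {"TSMC 10nm": 0.042, "Intel 10nm": 0.031, "TSMC 7nm": 0.027}
  base = bitcell_um2["TSMC 10nm"]
  for node, area in bitcell_um2.items():
      print(f"{node}: {base / area:.2f}x the SRAM density of TSMC 10nm")
  # Intel 10nm comes out ~1.35x denser than TSMC 10nm, TSMC 7nm ~1.56x,
  # i.e. Intel 10nm lands between the two, a bit closer to TSMC 7nm.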


Thank you for actually showing feature-size metrics; it has been a bit maddening to see people say 'you can compare them directly' when I can't find many comparisons of any actual features.



Meta: indenting is for code, and very hard to read on mobile.


Tsmc "7nm" and Intel "10nm" are roughly identical. Intel random logic is a bit denser thanks to COAG, SDG and BEOL and TSMC SRAM is a decent bit denser, that's mostly it.


No, TSMC 7nm is slightly ahead of Intel 10nm. Not 3nm ahead, but still, better.

Also, one exists and the other is a broken hunk of silicon. That's the most important metric.


The consensus the last time there was a delay was that TSMC's 7nm process is roughly comparable to Intel's 10nm process.

They're still ahead, but only until Intel finally ships 10nm.

I'm not sure how much this affects the industry in general, but I bet there are some interesting Intel vs Arm discussions being fueled in Apple's product roadmap meetings.


This is still a massive change. Intel used to be 1-2 generations ahead of everyone else. Now they're slightly behind.


Oh sure. I just like to be excited for the right reasons.

If we've gone asymptotic, it makes me wonder what other companies will be able to field a chip that's 90% as good as Intel in the next decade...


TSMC hasn't shipped 7nm yet. Nor has actual production of chips begun.


High-volume production started in April: https://www.anandtech.com/show/12677/tsmc-kicks-off-volume-p...

I don't believe a customer or customers have been announced publicly yet, but the rumour mill seems to think Apple's A12 will be one of the first to ship on TSMC 7nm.


True, but if all goes according to plan TSMC will have 7nm producing and shipping any moment now and ready for iPhone in September.


I think Intel tried to go too deep at once, thinking it had time to do so, and now that others are closing the distance bit by bit, it looks like they're late.


7 nm... 10 nm... it's all marketing bullshit at the end of the day. The only way to know is to compare the raw, individual aspects of the nodes.


Well, they have a lot of things to take care of before they release the next CPU:

https://www.intel.ca/content/www/ca/en/support/articles/0000...

In particular security issues related to the IME:

https://en.wikipedia.org/wiki/Intel_Management_Engine


I hate that thing more than anything else in computing.

Windows 10 spying pales in comparison to the theoretical capabilities of the ME.

There was this brief campaign to try to get AMD to let users control or disable their ME equivalent, but it never went anywhere.

I would kill for a modern computing platform without this crap built in. Soon forced DRM will go through it as well.

It's absolutely shocking to see how much control people (and even companies) have surrendered to Intel.

I'd recommend only buying libre / coreboot-compatible hardware (or Librem or System76 systems) until users can gain more control over the Intel ME.


In the server/workstation space you can also opt for an expensive Raptor Talos II system (but you'll get Power instead of x86). There is very little competition in the space for open/libre high-end computer hardware systems.


FX-8370 or FX-93xx, but the latter gets very hot.


Note this article is guessing, and it's very likely they are guessing wrong.

What was actually said (https://seekingalpha.com/article/4190920-intel-intc-q2-2018-...) is this:

> we continue to make progress on 10-nanometer. Yields are improving consistent with the timeline we shared in April, and we expect systems on shelves for the 2019 holiday season.

This article guesses this to mean Cannon Lake.

In reality, it almost surely means Ice Lake.


The secret behind 10nm technology:

https://www.asml.com/press/press-releases/earnings-growth-co...

“Outlook For the third-quarter of 2018, ASML expects net sales between EUR 2.7 billion and EUR 2.8 billion, a gross margin between 47 percent and 48 percent. R&D costs of about EUR 395 million, SG&A costs of about EUR 120 million. Our target effective annualized tax rate is around 14 percent.”


I don't believe ASML is holding up Intel's 10nm. It's a problem of Intel's own making. ASML's EUV equipment mostly relates to Samsung 7nm, GF 7nm, and TSMC 7nm+.


It's so strange how this is all playing out almost exactly like the Athlon XP days except with lots of cores and clouds.


Does anyone believe there's another Core Duo-type massive leap out there to even be had this time around, though? AMD is going to slowly pull a bit ahead of Intel, especially in bang for buck, but I don't see how Intel can catch back up.

As it is, the last time Intel jumped ahead it was by repackaging and souping up a mobile arch that was itself based on the old Pentium III arch. They can't really mine the parts bin that way this time around either.


The next big leap in computing is accelerators. Except getting accelerators right is as much a software investment as it is a hardware investment. That's what Nvidia got right, and why Nvidia has been so successful despite the fact that their hardware has massive limitations.

Of course, Intel also appears to be forgetting that they have the ultimate advantage in getting to accelerators (they can put the accelerator in the same socket; Nvidia can't), since they keep trying to compete with expansion cards despite that being the worst limiting factor for offload.


Fun fact: the Pentium III had an identical pipeline to the Pentium II with added SSE instructions, and the Pentium II was just the Pentium Pro pipeline with added MMX instructions. Intel started work on the Pentium Pro back in 1991.

Intel's current architecture is just small incremental improvements to a 27-year-old design. The biggest features they have added are Nehalem's hyperthreading (which they borrowed from the Pentium 4) and Sandy Bridge's uop cache (loop cache).

All their other performance improvements have come from small micro-optimisations and just chucking transistors at the design: better branch prediction, better and more load/store buffers, more execution units, bigger reorder buffers, wider instruction decoders, more renaming registers, wider instruction retirement, more unique/specialised instructions and, of course, clock speed improvements. But it's still fundamentally the same design, which Intel could have done back in 1991 if they had had the silicon budget.

And this was absolutely the right design decision; everyone else is copying the same design. Apple copied it with their Cyclone/Typhoon/Twister ARM CPUs. ARM copied it with their Cortex A57, A72 and A73 CPUs. AMD copied it with their Zen architecture (after trying out new ideas with Bulldozer and failing). IBM and Sun/Oracle have server-class CPUs with more or less the same ideas, but with 4- to 8-way "hyperthreading".

AMD's Zen might pull ahead slightly, but that's only because their design is newer and hopefully freer of cobwebs than Intel's.

But there are still plenty of new CPU design ideas out there; it's quite possible we will see another 50% jump in single-threaded performance.


If Intel goes multi-chip in the same package like Ryzen, then yeah, they can get more cores in, similar to Ryzen Threadripper.

So there is a leap coming for Intel as well if they go that route, and after the massive success of Ryzen, they are likely going there. Just easier to scale core counts if you can use smaller dies and just use a lot of them with an interconnect fabric.
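
For anyone wondering why smaller dies help, here's a minimal sketch using a simple Poisson defect-yield model; the defect density and die areas are made-up illustrative numbers, not actual Intel or AMD figures:

  import math

  def poisson_yield(area_mm2, defects_per_mm2):
      # Fraction of dies with zero fatal defects under a Poisson model.
      return math.exp(-area_mm2 * defects_per_mm2)

  D0 = 0.002       # hypothetical defects per mm^2 on an immature process
  big_die = 700    # one monolithic 32-core die, mm^2 (illustrative)
  chiplet = 200    # one 8-core chiplet, mm^2 (illustrative)

  y_big, y_chip = poisson_yield(big_die, D0), poisson_yield(chiplet, D0)
  print(f"monolithic die yield: {y_big:.1%}")    # ~24.7%
  print(f"single chiplet yield: {y_chip:.1%}")   # ~67.0%
  # Silicon spent per good 32-core product:
  print(f"monolithic: {big_die / y_big:.0f} mm^2")       # ~2839 mm^2
  print(f"4 chiplets: {4 * chiplet / y_chip:.0f} mm^2")  # ~1193 mm^2

Under those made-up numbers, four small dies waste far less silicon per good part than one big die, which is the economic argument for the interconnect-fabric route.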


Intel had two advantages:

1. Relentless process improvement, putting them 1-2 generations ahead of everyone else.

2. Using their monopoly to prevent OEMs from shipping AMD processors.

#1 isn't true anymore.

#2 is almost irrelevant since mobile has taken over and makes the PC revolution look like child's play by comparison.

Intel doesn't have a good graphics story. They sell hot power-hungry chips. They no longer have a process advantage. Their monopoly in x86 is increasingly (but not yet completely) irrelevant.

I'm sure they can ride the x86 profitability horse for at least 5-10 years. What then? Remember: RIM was profitable for several years after the iPhone was released. That didn't save them. Microsoft continued to rake in cash with Windows/Office as iOS/Android/AWS/GC established a new world order that threatened to make them irrelevant.


Intel needs to pull a Microsoft.

Basically the new Microsoft CEO has reinvigorated Microsoft by challenging the old preconceptions and doing what is right in the current environment. I wonder if Intel could find a CEO like that.

I still cannot believe that my favorite code editor on Linux is an open-source text editor from Microsoft (VS Code).


3. Yields. Historically Intel had the fastest ramp up to the highest yields.

I don’t know if that still holds true.


Intel's yields at 14nm++++++++++ are exceptional now. We'll see how TSMC's 7nm holds up, though. They're still using DUV lithography with boatloads of multipatterning. Intel's 10nm is running into issues due to that, so I'm curious what TSMC is doing correctly. They might just be saying they have good enough yields at 7nm and be totally full of shit. One thing about Intel's scale is they have to actually get it right before they can say they're successful at it.


Without coming up with a pretty new overall design, Intel will struggle with that: the IO capabilities of Zen are much better than those of today's Xeons. And the more chips, the more sockets, the more chip IO. On top of that, as the throughput requirements for NICs, accelerators, and storage (SSDs and NVRAM-type devices) keep increasing very quickly, that'll put further pressure on such a design.

Don't get me wrong, it seems likely that Intel is considering such options, but they won't be all too amazing for many workloads.


They do need to come up with a new design.

A few months ago Intel hired Jim Keller, who developed AMD's HyperTransport and Zen's interconnect fabric and was instrumental in the Zen project.


Looking at CPUs alone is probably too narrow of a focus. AMD bet the farm on a CPU architecture, and thankfully they appear to be winning, but they don't have competitors for most Intel products: FPGAs, Optane, some of the new AI chips, etc., and consequently Intel could still win in terms of ecosystem in the eyes of the big Cloud providers.


Except ARM is nearly a viable competitor now, too.

Windows 10 ARM laptops are already shipping.


From the original article:

>In the second-quarter results, Intel said that its 10-nanometer yields are "on track" with systems on the market in the second half of 2019. Krzanich's previous perspective wasn't specific on whether they would arrive in the first half of next year or in the second half. On the conference call with analysts on Thursday, Swan was more specific and said products would be on shelves in time for the holiday season.

>Murthy Renduchintala, group president of the technology, systems architecture and client group, said on the call that the products that will become available in 2019 are client computing products, whereas products for data center use will come "shortly after." The stock fell further after those comments but later rebounded as executives talked about ongoing research and development for next-generation 7-nanometer technology.


>Murthy Renduchintala, group president of the technology, systems architecture and client group, said on the call that the products that will become available in 2019 are client computing products,

This sounds to me like Apple telling Intel: we plan to ship new MacBook, MacBook Pro, iMac and Mac Pro models after the 2019 keynote, and if you don't deliver, we will kick all of Intel's products out of our roadmap, including the modem business, which offers little profit but which a lot of investors are watching.


I wonder how cloud computing providers feel about this. On the one hand, competition between Intel and AMD for the cloud probably means cheaper chips for them. On the other hand, they were probably looking forward to the power savings from Intel's 10nm chips.


At the volumes they buy, whatever they pay is very different and bears little relationship to what we pay. The price to beat is (acquisition + energy + real estate) / performance, over the product lifetime.
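
A toy illustration of that metric, with entirely made-up numbers rather than anything a cloud provider actually pays:

  # Hypothetical lifetime cost per unit of performance for one server CPU.
  acquisition = 4000.0      # CPU + platform cost, USD (made up)
  watts = 200.0             # average draw under load (made up)
  usd_per_kwh = 0.07        # bulk datacenter electricity rate (made up)
  space_per_year = 300.0    # amortized real-estate/cooling share, USD (made up)
  years = 4                 # assumed product lifetime
  perf = 1.0                # relative performance of this SKU

  energy = watts / 1000.0 * 24 * 365 * years * usd_per_kwh
  tco = acquisition + energy + space_per_year * years
  print(f"lifetime cost per unit performance: ${tco / perf:,.0f}")  # ~$5,691

At their scale, even a modest efficiency gain from a node shrink moves the energy term across millions of sockets, which is why the delay matters to them even if list prices don't.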


Azure has started using ARM chips with Windows Server...


And AMD's stock is up $3


Kicking myself for getting out at $17. Oh well, it was a good run, I've been holding it since it was $4.


humblebrag


Cry me a river. I'm still waiting for the day when our productivity scales with computing power (or node size). [1]

Until then, the drive toward smaller/faster/whatever-other-superlatives CPU's is just people running on a hamster wheel.

[1] https://foundersfund.com/the-future/#/artificial-intelligenc...


Could someone ELI5 why smaller/denser feature size and layouts is the way we achieve higher and higher speeds? Is it mainly about efficiency and heat?


As I heard, it is mostly about heat. However, there is also a distance component. It takes an electrical signal on the order of 1.5 * 10^-10 seconds to cross 3 cm, about the size of a CPU, which is roughly half a clock tick on a 3 GHz processor (one tick is about 3.3 * 10^-10 s). Add in 'fill time' (you need some time for the voltage to build and stabilize) and moving signals across a CPU becomes a challenge. My speed calculation used the speed of a signal in a copper wire; I'm not sure how the minute size and even the angles of CPU wiring affect that.
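
Rough numbers behind that, as a quick sketch; the ~0.66c signal speed in a copper trace is an assumed ballpark, and real on-chip wires are slower still because of RC delay:

  C = 3.0e8                  # speed of light, m/s
  signal_speed = 0.66 * C    # assumed signal velocity in a copper trace
  die_span = 0.03            # ~3 cm across a large CPU, in metres
  clock_hz = 3.0e9

  travel_time = die_span / signal_speed   # ~1.5e-10 s
  clock_period = 1.0 / clock_hz           # ~3.3e-10 s
  print(f"signal travel time: {travel_time:.2e} s")
  print(f"clock period:       {clock_period:.2e} s")
  print(f"fraction of a tick: {travel_time / clock_period:.2f}")  # ~0.45
  # Nearly half a 3 GHz clock tick just to cross the chip, before any
  # 'fill'/settling time or the much slower RC-limited on-chip wiring.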


Where does Intel get their codenames from?


Landmarks in Oregon; the design team is based in Oregon and gets to choose the names.


It's not just Oregon. Haswell (4th gen) is named after Haswell, CO (Intel has a presence in Fort Collins, CO) and Kaby Lake (7th gen) is named after a lake in Canada.


I'm from Colorado and had never heard of Haswell. I'm pretty well acquainted with mountain towns, but Haswell is a tiny town in eastern Colorado. Intel chose it because of the easy-to-pronounce name.

https://www.denverpost.com/2013/06/01/intels-newest-processo...


Yes, well after a couple decades they started to run out of landmarks :)


Nearby geological features.


They should name the next devices after Boring, OR


I believe Elon Musk has a trademark for that.


The reason is that using the names of public landmarks avoids potential trademark infringement, which almost any other name might trigger given the number of trademarks in existence today.


Trademarks are domain-specific; there aren't that many they could infringe on. They could make Comet-branded CPUs, and unless they packaged them up in green cylinders there's no chance people would mistake them for the cleaner.


Intel is in more markets than just CPUs, and any time they enter a new market, all the trademarks from that market come into play. That's what I heard from the Intel folks who described the reasoning.


AMD be like "C'mon Intel, you're making this too easy."


Smartphone profits paid the lion's share of 7nm development. That makes AMD's selling of its fabs look like a brilliant business move. At the time, IIRC, AMD sold them in desperation to stay afloat. So maybe it was just a "lucky" turn for AMD.


I wonder what Jim Keller has to say about this, wasn't he hired to oversee the introduction of the 10nm manufacturing process?


I don't think so, Keller's specialty is microarchitecture, not semiconductor node processes.


The end is nigh, Intel. Just like Apple switched to Intel when PowerPCs were not improving, so will Apple switch to ARM.


Macs aren't a huge percent of Intel's processor business. It's a big enough percent that they would miss it, but Apple's desktop and portable market share is still small. And then there are servers.


True, but Apple releasing a truly desktop-class ARM chip in a MacBook might finally kick Qualcomm and PC laptop manufacturers into gear to release something that doesn't suck (like when Apple released a 64-bit ARM CPU and Qualcomm first made fun of it, then quickly released their own).


Macs aren't even a significant percent of Apple's processor business.


This is true; however, Apple is probably the single most visible sales channel for Intel processors in the eyes of consumers. I'd expect this to have a much bigger impact on the markets because of the publicity of the change more than anything else.



