Hacker News

I’m starting to (almost) feel sorry for Intel now.



Please don't. They have plenty of cash, still the lead in IPC (for now), marketing power to defend their position, long-term contracts and business relationships in the server market, and incentives for partners should AMD inch closer.

For the first time, Intel's new CFO has finally admitted their 10nm won't be as profitable. But that doesn't mean it is unprofitable, as some media outlets are trying to spin it.

They expect Willow Cove to be ~15% faster than Ice Lake, with a sped-up (cough) 7nm introduction in 2021 (that is marketing speak for a higher-volume 7nm product launch), and a move to 5nm GAA in (late) 2023, roughly a year earlier than TSMC's 2nm GAA.

Their 14nm+++ is the second-highest-performance node to date (I think the highest-performance honour goes to GF with their IBM-derived 14nm node), with plenty of capacity coming online shortly.

So while they may not have node leadership, and likely won't regain it anytime soon, they are definitely capable of competing. If anything, I feel sorry for AMD for not getting higher server market share and higher profits/revenue. They have been working so hard and deserve a lot more.


And 3nm is planned for 2022. They are moving at an incredible pace. They will definitely hit a wall soon at this speed.


Which is their plan, as many people believe.

They want to drive the few competitors they have out of business quickly before that happens.

Hence their accelerating node transitions, even though it means not recovering as much R&D money as they otherwise could.

Only really high-margin products contend for sub-14nm node capacity, nothing else.

For as long as they maintain such a high pace, and such a wide lead over the competition, competitors get nothing while still having to spend the same astronomical sums on fab retooling.


Surely they know that the US government would never let Intel go out of business? It's too important from a defence perspective. And Samsung isn't going anywhere either.


It only takes blunting the competition's resolve enough to make them drop out of the race for the bleeding edge, and they will never close the gap again.

They would have to throw a staggering amount of cash at closing a gap of multiple sub-14nm nodes, but there will be no Qualcomms, Apples, and AMDs to pay them for that if they are a few years late.

In fact, even sub-40nm nodes are already barren ground with few clients left. Those who still hang onto 22nm and 14nm only do legacy products, and they will inevitably move on, leaving 22nm- and 14nm-only fabs without much business.

Both 22nm and 14nm still require extremely costly mask sets, long cycle times, and large MOQs, and they need very different equipment from 40nm+. And they can't support analog, embedded flash, power, and ULP devices as well as 40nm+ can, which makes them unattractive to microcontroller makers.

For 99% of companies in semi industry, the scaling ended at 40nm.

That's the cold, hard truth of it.


I imagine that a Chinese manufacturer will eventually also play a role, since it's a strategic priority for the PRC to be self-sufficient in semiconductor production.

The PRC (most probably) has the 'advantage' of a vast spy network in Taiwan, and (likely) in TSMC. Not to mention that salaries in China tend to be higher, leading to a brain drain from Taiwan to China.


Everybody in the business knows the secrets anyway. Institutional knowledge is the thing that is terribly difficult to transfer.

Putting together a fab is money, money and more money combined with enough smart folks providing continuous effort to actually solve problems. It's the "solve problems" rather than "cover up problems" where China often breaks down.

People forget that the US didn't just magically become a semiconductor powerhouse. It was thanks to VHSIC, the VLSI Project, and other initiatives--a multi-year concerted effort by all the US semiconductor industry and the US Federal Government--because they were terrified of (don't laugh now) the Japanese.

DARPA gets all the press for the Internet, but the actual semiconductor technology that made the chips that powered the computers and equipment behind the Internet was probably far more important.


   Everybody in the business 
   knows the secrets anyway.
That's so interesting. How does this knowledge (the secrets) transfer between individuals, in a way that doesn't generalise to teams? I'm asking because I've just been asked to build a research team ... (and one of my goals is to open all our results).


The dirty secret of semiconductor manufacturing is that there is a LOT of cargo culting--so nobody knows what all the essential knobs are. This is true of practically any extremely complicated process with lots of steps--repeatability is difficult--and it's not limited to semiconductors. It is exacerbated once the line starts making real money after it is up and running--nobody wants to touch anything for fear of breaking it.

For example, chemical mechanical polishing has some basic principles, but the full recipe for the polish process is voodoo. What kind of pressure profile? What formula of slurry? How long? You can steal the "full formula" ... but then discover it doesn't work in your fab (this is typical). Why? Who knows? Is this wafer too thick? Is the layer too hard? Is this polishing machine somehow different? Perhaps the machine operator learned to tweak the "documented formula" under certain conditions?

This is also why Intel makes a development fab and then stamps them out literally identically. That's what gives (gave?) Intel its absolutely insane yields.

A classic problem in this vein was the formula for the spacing/aerogel material in US nuclear bombs. The factory shut down, but nobody really noticed until they ran out of material. By then the factory was long gone and a new one had to be built--and, of course, the new material was useless. They had to embark on a research quest to figure out what significant information had been overlooked.

Another good example is injection molding lines. Lots of the time, when there is an injection molding problem the first solution is to lengthen the injection molding time by 10%. The people on that shift simply shrug and go on with life. That wasn't actually the problem (for example, maybe the incoming plastic pellets were too wet) but the people with a production quota don't care. It probably takes an old dude with a grey beard to grab a handful of plastic pellets and test them for water content, yell at the people who were supposed to prepare the plastic bits, and dial the speed back up to where it is supposed to be. That old dude is your institutional knowledge. If you don't have that grey beard, you simply lost 10% production capacity permanently. Have enough of those events and your injection molding company goes bankrupt.


This is really interesting, I didn't know any of this. Do you work in this field professionally?


Won't analog design advance until it's able to use these smaller nodes?


Some analog devices fare much better on planar processes, for lack of decent FinFET implementations.

If you want to add layers just to make analog devices, it's not cheap, and it will hurt an already poor yield.


>They want to drive the few competitors they have out of business quickly before that happens. Hence them accelerating node transitions in spite of not being able to recover as much RnD money as they can.

That is lots of accusation with little to no evidence.

If you have read any of their investor notes or meetings, they have been extremely conservative and have executed to perfection. They are profitable and don't mind competing. On every single leading node they charge their customers more to recoup their R&D. Hence the theory that Moore's law will stop working once their customers can no longer afford the leading node.

There is nothing high-pace about it; they deliver a new node every two years, exactly as you would have expected from Intel. The only difference is that Intel has been messing up since 2015/2016, and now TSMC is in the lead.

Compared to Samsung, which is basically funding their foundry with NAND and DRAM profits, I don't see how TSMC can be blamed for anything.


> There is nothing high pace about it,

16FFC - late 2016

12FF - Summer 2017

10FF - Late summer - autumn 2017

N7 - April 2018

N7+ (which is really like a standalone node) - Early 2019

N5 risk production - March-April 2019

N5 mass production - the first tapeouts would very likely have taken place in the coming weeks if not for the virus.

> If you have read any of their investor note, meetings, They have been extremely conservative, and executing to perfection

Why would you make somebody like investors privy to your most important strategies?

While they used to be a company with a reputation for squeezing everything out of a given node, they have truly shifted focus to HPC and cream-of-the-crop orders in the sub-40nm market.

Anything below 40nm other than the latest node is not getting anywhere near as much attention from them as when they were a "mainstream" fab.

This is the most logical strategy now, because there are very few companies in the whole world with money for sub-40nm tapeouts. It makes sense to keep these clients captive at all costs, and not to let competitors have any part of this very small pie.


>16FFC - late 2016......

2012 28nm

2014 20nm

2016 16nm

2018 7nm

2020 5nm

2022 3nm* ( Non-GAA )

Every node is on a full two-year cadence, in line with their customers' (lately Apple's) iPhone releases. 10nm was a pre-7nm node, and 12nm is 16nm's optimisation node. Nothing high-pace about it; they have been doing this since before becoming a "leading" node manufacturer.


Where are they getting the money? Are they that profitable or state sponsored?


Profitable.

They would be very profitable if they had the income from their advanced processes without the R&D expense for those processes.


They produce the entirety of Apple's Ax chips and AMD's lineup, plus the latest Snapdragons. They ship many units for each Intel CPU shipped, and offer things nobody else can. I imagine that makes them really profitable.


> Are they that profitable or state sponsored?

The world now ships ~1.2B smartphones every year, most of them fabbed at TSMC. That is Apple, Huawei, Qualcomm, and MediaTek; these four alone are likely 80%+ of the market. Add modems, WiFi, and dozens of other smaller components. And that is excluding Bitcoin ASICs, FPGAs, gaming GPUs, GPGPUs, network processors, etc. -- all markets that have exploded in the past 10 years and require leading-edge fabs, mostly TSMC's. In unit volume, this market is so much bigger than Intel's 200M+ PC market and the server market, and TSMC has been able to benefit from it.

And once you spread the R&D over a much larger volume of products, your unit cost drops, benefiting everyone in the industry.
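A toy sketch of that amortization effect (all dollar figures and volumes below are hypothetical, chosen only to illustrate how fixed R&D dilutes with volume):

```python
# Toy illustration of R&D amortization -- all figures are hypothetical.
# Average unit cost = marginal cost per chip + (fixed node R&D / units shipped),
# so a fab amortizing over smartphone-scale volume beats one limited to PC-scale volume.

def unit_cost(rnd_dollars: float, marginal_cost: float, units: int) -> float:
    """Average cost per chip once fixed R&D is spread over all units shipped."""
    return marginal_cost + rnd_dollars / units

RND = 10_000_000_000   # assume $10B to develop a leading node
MARGINAL = 50          # assume $50 marginal manufacturing cost per chip

for units in (200_000_000, 1_200_000_000):   # PC-scale vs smartphone-scale volume
    print(f"{units:>13,} units -> ${unit_cost(RND, MARGINAL, units):,.2f} per chip")
```

At these made-up numbers, six times the volume cuts the amortized R&D share per chip from $50 to about $8.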


GF has already bowed out ... only Samsung and Intel are really left now, no?


[flagged]


Why don't you graph TSMC's historical node sizes and forecast what their node size will be when Taiwan's population shrinks to, say, 10m. Will it be below the size of a single electron already? Then look up RAA in Wikipedia and consider whether there might be anything wrong with your initial argument.


Guess: declining IQ is down to measurement issues, or simply globalization (people move around). Fertility rates seem to do OK where there is an ambition to keep them up, e.g. with long, tax-funded parental leave. Taiwan seems like it's got the "Japan sickness".


> High IQ is linked to lower fertility around the world (dysgenics)

Idiocracy really had a lot of things figured out a decade ago :(


(I'm not sure why, but the parent comment is currently at -3. I have decided to find this both ironically and non-ironically hilarious.)



