TSMC Details 5 nm (wikichip.org)
198 points by baybal2 on March 22, 2020 | 101 comments



The thing that really jumped out at me was making the channel partially out of germanium. Velocity saturation has been a limiting factor in our tiny modern transistors for a while, playing its own minor part in the breakdown of Dennard scaling. The article didn't say if the increase applied equally to both electron mobility and hole mobility. Germanium tends to be better for both, but my materials science background isn't good enough for me to guess what it'll be in a silicon/germanium lattice. Temperature dependence is an equally interesting question, since germanium tends to deal with high temperatures a bit better.

This won't help with leakage issues or the increasing importance of wire (as opposed to transistor) capacitance. But it does look like a nice little win.

Speaking of wires, is TSMC still using straight copper or are they thinking of incorporating cobalt? Or did Intel's experience with that scare them off?

EDIT: And speaking of cobalt again, I really hope they aren't being too ambitious with this, and that being too innovative doesn't bite them the way it did Intel at 10nm. Though the rumor mill isn't sure whether it was the cobalt wires or the Contact Over Active Gate (COAG) that was Intel's main problem.


> The article didn't say if the increase applied equally to both electron mobility and hole mobility.

Electron mobility is not high in silicon, but it has never been a "blocker" before. In fact, the utilisation of strained silicon has gone down in the nodes after 40nm.

Silicon, however, has truly terrible hole mobility, and the wider industry has only just started to realise that it has lost many years to the fruitless pursuit of high electron mobility, when the real blocker has always been the pFET.

It was very prudent of TSMC to quietly continue pFET R&D, despite the industry's fixation on HEMTs and III-V devices.

p.s. N5 is a COAG process, from what they have just said.


The amount of Ge and thus strain has increased every node. All to make hole mobility better.

No one in industry was confused about HEMTs, and the needs of pFETs were well known.


> The amount of Ge and thus strain has increased every node.

Not really, from what I know. The compressively strained channel was dropped for some FinFET designs, and I think tensile straining was too in some, because it was exploding the litho layer count. Can you share your source?

Compare the amount of money spent on truly moonshot materials science like III-V logic with the amount of money spent on continuous improvement of pFET performance for mainstream applications.


From device cross sections and from working at both IDMs and suppliers. And from reading and visiting IEDM for a decade. You can keep up with most of it on SemiWiki; but really, just work under the assumption that people in the field are not stupid and can do math.

Tensile strain has no litho impact, and counting layers in use was a job I had.

Who spent on III-V? It got press coverage, but it was never part of much logic research. It was a focus for power, laser, and LED work.

TSMC, Intel, Applied Materials, ASM, and others spent vastly more on R&D than any government or academic work, and it isn't published.


I'll give you that. I haven't been involved with device and process development since around 2009, when I lost hope of getting into process engineering.

So, were strained channels used with FinFETs on anything mainstream? I heard news of a number of foundries dropping straining around the time of the first FinFETs.


SiGe transistors have been mainstream in RF electronics for decades.


And telecom hardware, right? I think anything with high frequency requirements has been germanium forever.

I’ve always wondered why there wasn’t more germanium in my computer. I’m sort of wondering if that will change with chiplets. Say for instance a germanium MMU or cache.


You and the parent poster are probably thinking of heterojunction SiGe bipolar transistors.

SiGe bipolar transistors are fabricated on top of old CMOS processes and are relatively cheap from a fixed-cost perspective but not highly integrated and only cost/power-efficient for niche PHY layer applications.


171.3 million transistors per mm^2 is mind boggling. Were Apple's A14 to have the same area as their A13 (98.48mm^2) and maintain that density throughout, you're looking at ~16+ billion transistors. Some comparisons:

- Apple A13: 8.5 billion

- Apple A12X: 10 billion (iPad Pro)

- Nvidia Titan RTX: 18.6 billion

I wonder what kind of use cases are unlocked with 16 billion transistors in your pocket. This node might be the one that gets us to AR.
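A quick back-of-envelope check of those numbers, as a Python sketch (not a die analysis; it assumes the A14 keeps the A13's 98.48mm^2 area and hits the full 171.3 MTr/mm^2 everywhere, which a real SoC full of SRAM, analog, and I/O never quite does):

    # Rough transistor-count estimate from the density figure in the article.
    density_mtr_per_mm2 = 171.3      # TSMC N5 estimate (MTr/mm^2)
    die_area_mm2 = 98.48             # Apple A13 die area, stand-in for a hypothetical A14

    transistors_m = density_mtr_per_mm2 * die_area_mm2
    print(f"~{transistors_m / 1000:.1f} billion transistors")   # ~16.9 billion

    # Ratios against the figures quoted above:
    for name, billions in [("Apple A13", 8.5), ("Apple A12X", 10.0), ("Titan RTX", 18.6)]:
        print(f"{name}: {transistors_m / 1000 / billions:.1f}x")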


There is no Moore's Law in optics & batteries.

The AR everyone imagines is much further away than most realize.


Agree, the popular idea of AR going around - wide FOV, fancy graphics, good occlusion of incoming light, tiny footprint - is unlikely for a long time due to optics. I think it's foolish to try to execute that vision now, which is why I've always thought Magic Leap's 'do it all' approach was completely wrong (and demonstrated they didn't really have a clear vision for what would make AR mainstream). The only way to achieve that vision for AR is to increment toward it over a long time.

I see v1 as a simple narrow-FOV heads-up display in a glasses frame, with a very focused use case around place-based notifications and heads-up information. No input at all, except Siri. The display only turns on when necessary. The iPhone is used for input and config, much like the Watch was initially.

Even getting to this v1 is going to be a huge challenge in such a small form factor, but a chip with this density will help with the power budget. This node consumes 30% less power and fits in an even smaller space.


I’m still half convinced that Apple's AR wearable won’t need any optical breakthroughs, because it won't use optical compositing.

If there’s one thing Apple's mobile engineering is uniquely good at, it’s building a low-latency pipeline. I think they could get the camera-to-screen latency low enough that they could build an AR wearable that is essentially a VR headset with wide-angle cameras on the front: This would enable the high-fidelity compositing that you already see on iPads and iPhones, basically leapfrogging everything else in the AR space without having to invent novel optical solutions.


There's also the non-technical aspect that Google, Facebook or some other advertising giant would probably co-opt the technology immediately to stuff it with ads. That alone turns me off the concept completely. It's a sad state of affairs. With Google Glass on one hand and Facebook's Oculus on the other it's pretty clear that they're aware of the potential of such technologies.


Sure, but that's a bit of a theoretical argument. The ability for a tech to be co-opted isn't something that prevents a v1 launch. That's something that happens later.

Google isn't going to be the one that creates breakthrough AR because that both requires sustained effort over long periods of time and being savvy about what mix of capabilities and details consumers will like. Google is structurally disadvantaged in both of these, which is why Glass has been a failure to date.

Facebook could make the hardware, they have a good research group. But they're at a big disadvantage for meeting core use cases because they lack any maps team, and place-based info is critical to early AR success. And as a company they're engagement-oriented, not really usefulness-oriented. AR is fundamentally a usefulness product, not an engagement product. VR is much more about engagement.


Siri-only input would make it a joke device. A device like that should be multi-input: voice, mouse, keyboard, your watch, phone, iPad, etc.


What I meant was input via the device itself. You can't expect people to press buttons on their glasses to get things done, that'd be a bad experience. The only other hands-free input method is gaze tracking, which would be great, but it may not make it into a v1 product.

Obviously there's input possibilities from other devices, and I mentioned iPhone. Unlikely mouse and keyboard input for a long time - AR's unique offering is freedom of movement, not being tethered to a desk.

Again, a v1 really has to be a minimal product that is first and foremost a heads-up display for a narrow set of use cases. Else it'll be too bulky or awkward - just look at the rest of the AR devices out there.


From what I heard, the one they are working on is more of a pure VR goggle, without an AR component besides basic depth perception and transparency.


That sounds vaguely like Google Glass; I wonder if a comeback would be on the horizon now that voice assistants are more widespread.


Could be, but I don't believe in Google's ability to create a category-defining consumer product. They're an enterprise company that thinks they're a consumer company, there's a lack of understanding humans in everything they do, and they never put enough wood behind each arrow.


They’re not even enterprise! They’re an advertising company that believes they are changing the world. They are right, just not for the reasons they tell themselves.

Microsoft was an operating system and word processor vendor who wanted to be many other things. The only other thing that ever turned a clear profit in a decade was their hardware division, which I gotta say was well deserved. But all of their money came from tech, and selling that tech. Google isn’t even that.


> There is no Moore's Law in optics & batteries.

Our optics are already quite good; I don't think we need higher-resolution cameras, just higher-speed cameras. I don't think this is impossible; we just don't have the processing power to deal with them yet.

For batteries, we already have some hope with Goodenough's solid-state batteries.

We would still need more compute than this I believe, but this should get us close.


AR's optics limitation is not cameras, it's the display. That's a tough optics challenge because you have to block incoming light (very hard) and display the data very close to the eye (hard) in a form factor that is appealing (very hard).


> form factor that is appealing

I don't think that's so difficult--once you can play games outside using headsets I think they will grow on people. Right now, as a fashion statement they represent people who sit at home and don't go outside. Not that there's anything significantly wrong with that, except it doesn't represent the majority.

Once we have good AR applications which work outdoors, the whole game will change.


If you had an extremely low latency lightfield display and camera that allowed the eye to do the focusing then you could skip the light blocking part (not that that makes the optics challenge easier).


For anything close to the resolution the tech demos have people expecting, a true light field display, if we could even make one that fit in a svelte headset, would require hundreds of gigabytes per second of bandwidth. There is obviously a lot of redundancy in that signal (many "overlapping" views of the same scene) so there could be lots of opportunity for compression, but then you just spend more of the incredibly constrained battery budget on the CPU/GPU.

Add SLAM, CPU/GPU, wireless radios... and do all of that all day on just a <5Wh battery (limited by a form factor anywhere close to a regular pair of glasses).
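For a sense of scale, here is a rough Python sketch of that bandwidth claim; every number in it is an assumption picked for illustration (per-view resolution, view count, bit depth, refresh rate), not anything from a real design:

    # Crude uncompressed-bandwidth estimate for a hypothetical light field display.
    px_per_view  = 3840 * 2160    # assume ~4K per elemental view
    views        = 8 * 8          # assume a modest 8x8 grid of views per eye
    bytes_per_px = 3              # 24-bit colour, no HDR
    hz           = 90             # typical HMD refresh target
    eyes         = 2

    bytes_per_sec = px_per_view * views * bytes_per_px * hz * eyes
    print(f"~{bytes_per_sec / 1e9:.0f} GB/s uncompressed")   # ~287 GB/s with these assumptions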


I don't think blocking light is essential. Even so, an LCD for blocking plus microLEDs might work.


Clearly it’s not essential, as Microsoft and Magic Leap have brought products to market without it. But the experience of basically wearing sunglasses indoors while your AR imagery shines glaringly bright against its surroundings is hardly ideal.

If you’re aware of any applications of LCD to accomplish optical subtraction in AR, I’d love to read up on them.


AR display = optics.


As a former mobile computing nerd and sometime supercapacitor investor, I learned a few things:

The rule of thumb in batteries is about 6% improvement a year, so a doubling every 12 years. This hasn't changed much, if at all, and may actually be worse: the very best rechargeable AA I could find 20 years ago was 1600mAh, which would mean I should have 5Ah AAs now. Are we even in the ballpark?
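(Checking my own numbers with a tiny Python sketch, assuming the 6%/year figure holds exactly:

    growth = 1.06
    print(f"{growth ** 12:.2f}x after 12 years")                      # ~2.01x, i.e. a doubling
    print(f"{1600 * growth ** 20:.0f} mAh after 20 years of 6%/yr")   # ~5131 mAh, the ~5Ah figure above

so the rule of thumb is at least internally consistent; whether real cells kept up is the open question.)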

So if you saw new battery research that doubled battery density, either you would see it in about ten years or not at all. Goodenough is 94, which means that, one way or another, Braga (his collaborator) will have to finish this.

Also, an important rule for mobile power: if someone mentions power density as one number instead of two, they have a horrible secret they don't want you to know. Power density is measured per unit volume and per unit mass. There are very heavy small batteries and very voluminous light batteries, and either one creates design issues for a portable device. You want a small, light battery; not all breakthroughs improve both, and so some may be limited to stationary applications.


And yet... https://youtu.be/52ogQS6QKxc

There's no fundamental physical limit that forces AR/VR headsets to be big and heavy.


Batteries have also been improving exponentially; it's just at a much slower 5 to 8% a year. I haven't seen the same data for optics, but I wouldn't be surprised if the situation was the same.


There is a rough equivalent to that for batteries: Coulomb's law [1]. The energy contained in a battery increases exponentially the closer the anode and cathode are to each other.

Due to this, scale and other reasons, we’ve seen batteries become 15-20% cheaper year over year [2] for at least the last two decades.

[1] https://en.wikipedia.org/wiki/Coulomb%27s_law [2] https://data.bloomberglp.com/bnef/sites/14/2017/07/BNEF-Lith...


Wait, what?

It's linear, and it only applies to capacitors. Batteries are a completely different beast.

The cost reduction we have been seeing for batteries has completely different reasons, and no relation at all with that.


Divided by distance^2 - looks inverted exponential to me.

It does apply to spooled batteries, which is pretty much all modern batteries, most importantly Li-ion batteries.


> Divided by distance^2 - looks inverted exponential to me.

Quadratic growth is very, very different from exponential growth – both from a theoretical as well as a practical perspective.


What you've linked has essentially nothing to do with the discussion. What the person who replied to you thought you were talking about was the equation for energy stored in a capacitor, which is proportional to the inverse of the distance.


It absolutely has. It’s basic physics and it definitely applies to batteries or in fact any static charges, which is what is being discussed.


I'm not a physicist, but my understanding is that batteries do not use the electric field at all to store charge, unlike a capacitor. Batteries use chemical reactions to move electrons, not static electric charge.


> Divided by distance^2 - looks inverted exponential to me.

As stated in the Wikipedia article you linked to, that's an inverse-square law. That's a far cry from inverse-exponential.


The force is proportional to distance^-2, the resulting energy is proportional to distance^-1.
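A quick numerical check of that, as a Python sketch with unit charges and k = 1 assumed: the work to pull two attracting charges apart from separation d out to infinity is the integral of k*q1*q2/r^2 dr, which comes out to k*q1*q2/d, so halving the distance only doubles the energy - a 1/d power law, nothing exponential.

    # Integrate the Coulomb force from d to infinity and compare with k*q1*q2/d.
    from scipy.integrate import quad

    k = q1 = q2 = 1.0
    for d in (1.0, 0.5, 0.25, 0.125):
        energy, _ = quad(lambda r: k * q1 * q2 / r**2, d, float("inf"))
        print(f"d = {d:5.3f}  energy = {energy:6.3f}  (closed form k*q1*q2/d = {k * q1 * q2 / d:6.3f})")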


Batteries are electrochemical[1], and the amount of energy in them depends on the amount of active ingredients (for a given composition).

I guess the distance between the electrodes affects the internal resistance, which can affect the effective energy output somewhat, but primarily it's the number of molecules that can undergo redox that determines the energy stored.

[1]: https://en.wikipedia.org/wiki/Galvanic_cell


> I wonder what kind of use cases are unlocked with 16 billion transistors in your pocket.

Compared to 8 billion? Realistically, it will let web developers write even more bloated Javascript SPAs without enough people complaining about the speed to improve it.


And Intel is now headed by someone with a 100% finance/business background. Intel is nipping at the heels of Boeing. Tens of billions of dollars in dividends and stock buybacks have been announced that should be going toward saving their (and by proxy, the U.S.'s) manufacturing capability.


They have Jim Keller, though.


True, but it seems to me that he's been most successful at companies that are willing to place big bets on his team and put a lot of institutional weight behind him. His work at Apple, AMD, and Tesla all seem to point towards that conclusion. One wonders if an Intel that's less focused on hardware innovation will see the same effects.


The title is kind of misleading.

> TSMC made every effort to avoid detailing the actual properties of that channel (every related question was met with the tautology: “those who know, know”). [...] We believe TSMC is employing a SiGe channel for the pMOS devices


There is only one known high-mobility material amenable to easy integration with a silicon MOS process.

None of InP, SiC, GaN, GaAs, or the other III-V materials have ever been integrated with silicon in a fab setting for IC production.

And only one material out there makes sense to use specifically for pMOS production: high hole mobility is a much rarer trait than high electron mobility.


Isn't MACOM's process GaN on Si, which is cheaper but has lousy thermal performance compared to GaN on SiC? I thought they were touting GaN integration with CMOS?


GaN on Si is a thing from a completely different ballpark.

Here we are talking about material integration on a per-device basis, not across the entire wafer surface.


And that material is what?


Germanium, presumably, as that is the material the article mentions they are using.


Sounds like the details are still under NDA.

Military contracts come to mind - conservatively long, too long, and sufficiently bureaucratically mired that the company's own announcement came before the red tape had properly moved out of the way.

Or something.


> Our current estimates remain at 48 nm poly pitch and 30 nm metal pitch. Those dimensions yield an estimated device density of 171.3 MTr/mm².

I just sized up 1mm between my fingers.

Um. WOW. Really. Wow.


For reference, a 64-bit Pentium 4 core was (drum roll) 125 million transistors. And it was 100x bigger (112mm^2).
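Rough math on that comparison, as a Python sketch using just the numbers quoted in this thread:

    p4_transistors_m = 125.0    # 64-bit Pentium 4, millions of transistors (quoted above)
    p4_area_mm2      = 112.0    # quoted die area
    n5_density       = 171.3    # MTr/mm^2, TSMC N5 estimate from the article

    p4_density = p4_transistors_m / p4_area_mm2        # ~1.1 MTr/mm^2
    area_at_n5 = p4_transistors_m / n5_density         # ~0.73 mm^2 for the same transistor count
    print(f"P4 density: ~{p4_density:.1f} MTr/mm^2")
    print(f"Those 125M transistors at N5 density: ~{area_at_n5:.2f} mm^2, "
          f"so the old die is ~{p4_area_mm2 / area_at_n5:.0f}x larger")   # ~150x, same order as the 100x above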


Wow at that too.

To be honest, one of the things I was considering when posting the GP was the sheer impossibility of leveling anything more precise than hand-wavy, drunk opinions about the end-to-end security state of that much, well, entropy.

It's gotten almost like a bizarre version of inverted quicksand, where instead of it being bad because you're sinking into sand, it's bad because the sand is shrinking into nothingness from between your fingers.

The next few years are going to be VERY interesting, I think.

IIUC, OoO execution and branch prediction have kind of been the CPU engineering meal-ticket workaround for "solving" the memory latency problem, and now everyone apparently has to rethink that.

My favorite would be the "disappearance" of the A20 line, though. As in, it's there, it's doing its thing, but everyone collectively forgot it existed, and now everybody needs a microcode update and an SGX cert respin AGAIN.

And all this without access to specialized tooling (the kind that evolves over a decade, regardless of knowledge), AND the fact that we're (apparently?) only forgetting about (seemingly?) little things at this point. Haha...ha...


On the other hand, I keep hearing people say that nm is increasingly just a marketing number - and indeed, for all of Intel's difficulties, I don't see too many chips beating Intel in single-threaded performance.

I say this as someone who bought a Ryzen-based system last year after significant research.


This is purely speculative but based on my own experience.

I firmly believe that real-world performance is bottlenecked by cache before the slight differences in OoO/pipelines/speculation between Intel's and AMD's CPUs. And the metric I really care about is the time it takes to read a cache line and the cache miss penalty. Those two fundamental pieces of the system are the ultimate bottleneck that developers can't deal with.
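If you want to eyeball that metric yourself, here is a rough pointer-chasing sketch in Python. The absolute numbers are dominated by interpreter overhead, so only the relative jump once the working set spills out of the caches means anything; a serious measurement would use a C microbenchmark or an existing tool like lmbench. The array sizes are arbitrary illustrative choices:

    import time
    import numpy as np

    def single_cycle_permutation(n):
        """Permutation of 0..n-1 forming one big cycle, so a chase touches every element."""
        order = np.random.permutation(n)
        perm = np.empty(n, dtype=np.int64)
        perm[order[:-1]] = order[1:]
        perm[order[-1]] = order[0]
        return perm

    def avg_hop_ns(n, hops=200_000):
        perm = single_cycle_permutation(n)
        idx = 0
        start = time.perf_counter()
        for _ in range(hops):
            idx = perm[idx]      # dependent, effectively random memory access
        return (time.perf_counter() - start) / hops * 1e9

    for n in (4_096, 524_288, 33_554_432):   # ~32 KB, ~4 MB, ~256 MB of int64
        print(f"{n:>10} elements: ~{avg_hop_ns(n):5.0f} ns/hop (interpreter overhead included)")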

It's really easy to construct cases where one CPU beats another in either single- or multithreaded performance. The hard thing to make a judgement on is how it looks in real-world software, and the more cumulative benchmarks are more telling of the quality of the software you bench than of the CPU. Particularly because optimizing for cache is so difficult. It's trivial to hit the memory wall and decide that's it, but it's really hard to decide that your entire architecture was flawed, your choice of language was wrong, your choice of algorithm or data structure was wrong, etc., or to A/B two different implementations and pick the one that's slightly faster. The ROI isn't there.

There's a philosophical point here, though, about the nature of "better" as it applies to the real world that makes my belief moot: even if I'm right, when X% of the software people use runs Y% faster on a particular architecture, it doesn't matter whether that's because developers wrote and benched their code on that architecture. It's still "better."

All that said, I'm very happy with my Zen2 purchase.


Real-world performance includes software that can avoid cache misses (most games) and code that can't (most compilers). It is good to make caches bigger and faster, especially for less carefully engineered software, which tends to have many layers of indirection, a JIT, and a garbage collector all putting pressure on the caches. Smaller transistors help us get bigger L3 caches, which is something that helps Zen 2 compile code pretty fast! Zen 3 makes that L3 unified, so it will be effectively bigger still.

Making L1 and L2 caches bigger is problematic, as that also makes them slower.


Intel's advantage is architectural, not process. That architectural advantage has started to look extremely shaky after Meltdown/Spectre/Spoiler/etc - Intel gained a lot of performance through speculative execution at the cost of correctness.


The transistors are actually getting smaller; that just doesn't lead to higher clock rates any more.


But less heat, which allows you to cram more stuff into a package, which is a win!


But nm isn't measuring transistor size. nm refers to the size of an ambiguous "feature".


I’m starting to (almost) feel sorry for Intel now.


Please don't. They have plenty of cash, still the lead in IPC (for now), marketing power to defend their position, long-term contracts and business relationships in the server market, and incentives for partners should AMD inch closer.

For the first time, Intel's new CFO finally admitted their 10nm won't be as profitable. But that doesn't mean they are not profitable, as some media are trying to spin it.

They expect Willow Cove to be ~15% faster than Ice Lake, speeding up (cough) their 7nm introduction in 2021 (that is marketing speak for a higher-volume 7nm product launch), and moving to 5nm GAA in (late) 2023, roughly a year earlier than TSMC's 2nm GAA.

Their 14nm+++ is the 2nd-highest-performance node to date (I think the highest-performance honour goes to GF with their IBM-derived 14nm node), and plenty of capacity is coming up shortly.

So while they may not have node leadership, and likely won't regain it anytime soon, they are definitely capable of competing. If anything, I feel sorry for AMD not getting higher server market share and higher profits/revenue. They have been working so hard and deserve a lot more.


And 3nm is planned for 2022. They are moving at an incredible pace. They will definitely hit a wall soon at this speed.


Which is their plan, as many people believe.

They want to drive the few competitors they have out of business quickly before that happens.

Hence their accelerating node transitions in spite of not recovering as much R&D money as they could.

Only really high margin products contend for <14nm node capacity, and nothing else.

For as long as they maintain such a high pace and a wide lead over the competition, competitors get nothing, while still having to spend the same astronomical sums on fab retooling.


Surely they know that the USG would never let Intel go out of business? It's too important from a defence perspective. And Samsung isn't going anywhere either.


It only takes blunting the competition's resolve to make them drop out of the race for the bleeding edge, and then they will never close the gap again.

They would have to throw a staggering amount of cash at closing a gap of multiple <14nm nodes, but there will be no Qualcomms, Apples, and AMDs to pay them for that if they are a few years late.

In fact, even sub-40nm nodes are already barren ground without many clients left. Those who still hang onto 22nm and 14nm only do legacy products, and they will inevitably move on, leaving 22nm- and 14nm-only fabs without much business.

Both 22nm and 14nm still require extremely costly mask sets, high cycle times, and large MOQs. They need very different equipment from 40nm+. And they can't support analog, embedded flash, power, and ULP devices as well as 40nm+ can, which makes them unattractive to microcontroller makers.

For 99% of companies in semi industry, the scaling ended at 40nm.

That's the cold, hard truth of it.


I imagine that a Chinese manufacturer will eventually also play a role, since it's a strategic priority for the PRC to be self sufficient in terms of semi-conductor production.

The PRC (most probably) has the 'advantage' of a vast spy network in Taiwan, and (likely) in TSMC. Not to mention that salaries in China tend to be higher, leading to a brain-drain TW -> CN.


Everybody in the business knows the secrets anyway. Institutional knowledge is the thing that is terribly difficult to transfer.

Putting together a fab is money, money and more money combined with enough smart folks providing continuous effort to actually solve problems. It's the "solve problems" rather than "cover up problems" where China often breaks down.

People forget that the US didn't just magically become a semiconductor powerhouse. It was thanks to VHSIC, the VLSI Project, and other initiatives--a multi-year concerted effort by all the US semiconductor industry and the US Federal Government--because they were terrified of (don't laugh now) the Japanese.

DARPA gets all the press for the Internet, but the actual semiconductor technology that made the chips that powered the computers and equipment behind the Internet was probably far more important.


   Everybody in the business 
   knows the secrets anyway.
That's so interesting. How does this knowledge (the secrets) transfer between individuals, in a way that doesn't generalise to teams? I'm asking because I've just been asked to build a research team ... (and one of my goals is to open all our results).


The dirty secret of semiconductor manufacturing is that there is a LOT of cargo culting--so nobody knows what all the essential knobs are. This is true of practically any extremely complicated process with lots of steps--repeatability is difficult--and it's not limited to semiconductors. It is exacerbated when the line starts making real money once it is up and running--nobody wants to touch anything for fear of breaking it.

For example, chemical mechanical polishing has some basic principles, but the full recipe for the polish process is voodoo. What kind of pressure profile? What formula of slurry? How long? You can steal the "full formula" ... but then discover it doesn't work in your fab (this is typical). Why? Who knows? Is this wafer too thick? Is the layer too hard? Is this polishing machine somehow different? Perhaps the machine operator learned to tweak the "documented formula" under certain conditions?

This is also why Intel makes a development fab and then stamps them out literally identically. That's what gives (gave?) Intel its absolutely insane yields.

A classic problem in this was the formula for the spacing/aerogel material in US nuclear bombs. The factory shut down, but nobody really noticed until they ran out of material. By then the factory was long enough gone that a new one had to be built--and, of course, the new material was useless. They had to embark on a research quest to figure out what significant information had been overlooked.

Another good example is injection molding lines. Lots of the time, when there is an injection molding problem the first solution is to lengthen the injection molding time by 10%. The people on that shift simply shrug and go on with life. That wasn't actually the problem (for example, maybe the incoming plastic pellets were too wet) but the people with a production quota don't care. It probably takes an old dude with a grey beard to grab a handful of plastic pellets and test them for water content, yell at the people who were supposed to prepare the plastic bits, and dial the speed back up to where it is supposed to be. That old dude is your institutional knowledge. If you don't have that grey beard, you simply lost 10% production capacity permanently. Have enough of those events and your injection molding company goes bankrupt.


This is really interesting, I didn't know any of this. Do you work in this field professionally?


Won't analog design advance until it's able to use these smaller nodes?


Some analog devices do much better on planar processes, for lack of decent FinFET implementations.

If you want to add layers just to make analog devices, it's not so cheap, and it will damage an already bad yield.


>They want to drive the few competitors they have out of business quickly before that happens. Hence them accelerating node transitions in spite of not being able to recover as much RnD money as they can.

That is a lot of accusation with little to no evidence.

If you have read any of their investor notes or meetings, they have been extremely conservative and executing to perfection. They are profitable and do not mind competing. On every single leading node they charge their customers more to recoup their R&D -- hence the theory that Moore's Law will stop working once their customers can no longer afford the leading node.

There is nothing high-pace about it; they are delivering a new node every two years, exactly as you would have expected from Intel. The only difference is that Intel has messed it up since 2015/2016, and now TSMC is in the lead.

Compare that to Samsung, which is basically funding their foundry with NAND and DRAM profits. I don't see how TSMC can be blamed for anything.


> There is nothing high pace about it,

16FFC - late 2016

12FF - Summer 2017

10FF - Late summer - autumn 2017

N7 - April 2018

N7+ (which is really like a standalone node) - Early 2019

N5 risk production - March-April 2019

N5 mass production - very likely the first tapeouts were to take place in the coming weeks, if not for the virus.

> If you have read any of their investor note, meetings, They have been extremely conservative, and executing to perfection

Why would you make somebody like investors privy to your most important strategies?

While they used to be a company with a reputation for squeezing everything out of a given node, they have truly shifted focus to HPC and cream-of-the-crop orders in the sub-40nm market.

Anything below 40nm other than the latest node is not getting anywhere near as much attention from them as when they were a "mainstream" fab.

This is the most logical strategy now, because there are very few companies in the whole world with money for sub-40nm tapeouts. It makes sense to keep these clients captive at all costs, and not let competitors have any part of this very small pie.


>16FFC - late 2016......

2012 28nm

2014 20nm

2016 16nm

2018 7nm

2020 5nm

2022 3nm* ( Non-GAA )

Every node is on a full two-year cadence in accordance with their customer's (lately Apple's) iPhone releases. 10nm is pre-7nm, and 12nm is 16nm's optimisation node. Nothing high-pace about it, as they had been doing this before becoming a "leading"-node manufacturer.


Where are they getting the money? Are they that profitable or state sponsored?


Profitable.

They would be very profitable if they had the income from their advanced processes without the R&D expense for those processes.


They produce the entirety of Apple's Ax chips and AMD's lineup, plus the latest Snapdragons. They are shipping many units for each Intel CPU shipped, and they offer things nobody else can. I imagine that makes them really profitable.


> Are they that profitable or state sponsored?

The world now ships ~1.2B smartphones every year, most of them fabbed at TSMC. That is Apple, Huawei, Qualcomm, and MediaTek; these four alone are likely 80%+ of the market. Add modems, WiFi, and dozens of other smaller components. And that is excluding Bitcoin, ASICs, FPGAs, gaming GPUs, GPGPU, network processors, etc.; all these markets have exploded in the past 10 years and require leading-edge fabs, most of that coming from TSMC. The market is so much bigger in unit volume than Intel's 200M+ PC market and server market, and TSMC has been able to benefit from it.

And once you spread the R&D over a much larger volume of products, your unit-cost economics improve, benefiting everyone in the industry.


GF already bowed out ... only really Samsung and Intel left now, no?


[flagged]


Why don't you graph TSMC's historical node sizes and forecast what their size will be when Taiwan's population shrinks to, say, 10m. Will it be below the size of a single electron already? Then look up RAA in Wikipedia and consider whether there might be anything wrong with your initial argument.


Guess: declining IQ is measurement issues or simply globalization (people move around). Fertility rates seem to do OK where there is an ambition to keep them up, e.g. with long, tax-funded parental leave. Taiwan seems like it's got the "Japan sickness".


> High IQ is linked to lower fertility around the world (dysgenics)

Idiocracy really had a lot of things figured out a decade ago :(


(I'm not sure why, but the parent comment is currently at -3. I have decided to find this both ironically and non-ironically hilarious.)


I remember reading a lot about a "brick wall" when they pushed towards EUV, how challenging it would be to reduce feature size further etc etc. But it seems they're moving along just fine. What happened?


To put it into perspective, a coronavirus would be only about 25 transistors long at this scale! However, to be fair, it has 30 kilobases, where each base is 2.5nm x 0.3nm. So we still have some way to go...


Game, set, match. It's over for Intel. They sat on their asses for too long and now are getting eaten by the former underdogs.

This shift is great news for literally everyone except Intel's execs' bonus packages.


I don't understand callous, contemptuous comments against Intel's manufacturing capabilities. There are incredible people making things possible from both sides in one of the most advanced manufacturing processes in the world. I've worked inside a fab, and I can tell you that the industry is much more appreciative of its competitors than the outside fanboys, who are largely pissed off at the executive decisions (lawsuits, etc.).

This article is about TSMC's technical achievements, and your comment has not added anything to the discussion. Time and again I see this rooting-for-the-"underdog" behavior, all much the same (just with the sides flipped) as in 2006. I wrote about it here: https://news.ycombinator.com/item?id=22515546

If you'd like to know what goes into shrinking a process node from a lithography standpoint, I implore you to watch this: https://www.youtube.com/watch?v=f0gMdGrVteI

and be in complete utter awe... Tell me if they are "Sitting on their asses"?


> I can tell you that the industry is much more appreciative of its competitors

Totally agree. Back in 2012, Intel, TSMC, and Samsung pooled together roughly 5 billion of investment [1] in ASML to build the EUV machines needed to manufacture 5nm today.

[1] https://www.asml.com/en/news/press-releases/samsung-joins-as...


Reading BS release after BS release from Intel about their 10nm "successes" makes you tired of their stuff. The number of times they denied problems, etc.


> I don't understand callous, contemptuous comments against Intel's manufacturing capabilities. There are incredible people making things possible from both sides in one of the most advanced manufacturing processes in the world. I've worked inside a fab, and I can tell you that the industry is much more appreciative of its competitors than the outside fanboys

Where did you read contempt? The OP was merely stating the fact that Intel has lost the manufacturing race big time, which is hard to refute given TSMC's progress. "Sitting on their asses" is just an illustrative way of saying that.

It does not mean Intel's people were stupid, or that they didn't appreciate their competing colleagues, or that Intel would now be worthless. If anything, it just makes more sense for a chip company to focus its efforts on designing chips and outsource manufacturing, which is basically a separate business anyway -- unless they happen to also have an edge in manufacturing, which Intel did have earlier.


I think you might be projecting because IMO everyone around here in HN knows that the engineers are doing a really good job -- but management gets in the way, all too often.

So let's not conflate excellent and talented engineers with greedy shady executives whose big PR move was to publish detailed benchmarks on the last-gen AMD literally days before the current-gen AMD dropped.


Oh, please ~ Intel fucked up their 10nm process pretty hard, and they kept trying to fix it for years. They also had to keep pushing their roadmap back again and again for 10nm.

This is entirely due to Intel's awful management getting in the way. Intel's engineers might be clever, but with shitty management, that can all go to waste.

Intel's management have been sitting on their arses, more or less, when you consider the fact that they've really been dragging their feet in terms of progress.

AMD has made more progress in 3 years than Intel has made in 10...

Intel simply got too comfortable with their monopoly.


I know that Intel has major issues with 10nm, but I don’t know exactly what went wrong.

Can you educate me?

I’m particularly interested in how management sabotaged the ability to get 10nm to work.


Wasn't Intel planning to become a customer of TSMC?


Intel's CEO was seen in Hsinchu last year. There could only have been one company for him to visit there.


"Maximize shareholder value" strikes again. By the time we're through America will have lost its dominance in every sector.



