Global Foundries discloses 7nm process detail (semiwiki.com)
139 points by rbanffy on July 5, 2017 | hide | past | favorite | 60 comments



Yet another fab pushing deep ultraviolet lithography as far as it will go for 7nm, rather than going to "extreme ultraviolet". "Extreme ultraviolet" is really soft X-rays. The "light source" is either a synchrotron, or an incredible kludge where droplets of tin are vaporized by lasers. They also produce totally incoherent light, while the ordinary processes use lasers producing coherent light, which focuses better.

Deep ultraviolet light source: [1] Little box.

Extreme ultraviolet light source: [2] Two floors of equipment.

Nobody really wants to go to EUV with the existing sources. The industry hopes for an EUV source that isn't insanely expensive, incoherent, dim, and an operational headache. But there's nothing better coming along in the near term. Intel and Samsung have chosen to build EUV fabs, to be ready in 2019, maybe. Everybody else is trying hard not to.

[1] http://www.oxxius.com/LUV-series-266nm-280nm-CW-laser [2] http://www.anandtech.com/show/10097/euv-lithography-makes-go...


I know nothing about this area, so this is kind of an ELI5 question, but: If the wavelength of the light defines the lower limit of detail size, why can't they switch to actual x-rays, which we already seem to be really good at producing?


Not my field, but I'd guess that one problem is that the hotter the photon, the better it is at penetrating matter. EUV photomasks are already a real pain in the butt to make[1], and making them work for shorter wavelengths probably introduces even more excitingly intractable engineering problems.

1: https://www.nist.gov/sites/default/files/documents/pml/div68...


Masks may be a pain, but the pellicle (the part that protects the masks from contamination) is non-existent unless they've made a breakthrough I'm unaware of:

https://www.semiwiki.com/forum/content/3720-euv-pellicles.ht...

Basically this means any dust or contamination of the mask by anything bigger than 50-80 nm can ruin the maskset and destroy the yield until it is replaced. It's a huge challenge.


If you go too far into the X-ray range, the high-energy X-ray photons penetrate the masks.


One fun fact about EUV lithography that I learned recently: the masks and optics all work by reflection, since EUV photons aren't very "optical"-- you either have a thin film that EUV ghosts right through, or a thick film that stops it. Changing the type of atom doesn't much matter, there's not really anything akin to a pigment for energetic photons that's also thin enough to be used for nanometre lithography.

But EUV doesn't like to reflect much, either, so the mirrors are made of stacks of metal films, which absorb a lot of the light and need active cooling. So that's the reason EUV sources have to be so much brighter than LUV sources: the optics eat most of the light getting it to the wafer.

EDIT: elaborating on UV optics.


The optics are reflective because almost everything absorbs EUV.


Or rather, stuff either absorbs EUV or doesn't absorb it. I edited the parent comment to elaborate.


coool


I think it's that we don't have materials that can focus x-rays with refraction the way we do visible wavelengths. The alternatives are tricky.


Actually, x-ray lithography was a competing technology to EUV for a long time, but it has its own problems, not least that the X-rays pass right through the wafer itself, and that optics for it are ridiculously hard to make. Masks are already really hard to write and are projected to explode in price (not that they haven't already). Don't crush chip startups under mask costs ;)


>why can't they switch to actual x-rays, which we already seem to be really good at producing?

The energy level is unnecessary. The lithography process does a "step-down" via reduction optics, which sets the feature size. This becomes much harder to do as you decrease wavelength (i.e. increase energy).
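For scale, a back-of-envelope calculation of the photon energies involved, using E = hc/λ (the wavelengths are the standard ArF DUV and EUV values; the constants are CODATA):

```python
# Photon energy E = hc/lambda for lithography wavelengths.
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a given wavelength in nm."""
    return H * C / (wavelength_nm * 1e-9) / EV

print(photon_energy_ev(193))   # ArF deep UV: ~6.4 eV
print(photon_energy_ev(13.5))  # EUV: ~92 eV, soft-X-ray territory
print(photon_energy_ev(1.0))   # a "hard" X-ray: ~1240 eV
```

Going from DUV to EUV is already a ~14x jump in photon energy; true X-rays are another order of magnitude beyond that, which is why materials that worked as refractive optics at DUV simply stop cooperating.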


IIRC, same reason you use a laser rather than a lightbulb in a laser cutter. Conventional x-ray sources don't focus anything like as well.


coherency/focusing


From [2]: To put it simply: in order to generate 13.5 nm EUV light in a special plasma chamber, you need a very powerful laser (because a significant amount of its power will be wasted); a generator and a catcher for tin droplets (in addition to a debris collector); as well as a special, nearly perfect, elliptical mirror. To make everything even trickier, since EUV light with 13.5 nm wavelength can be absorbed by almost any matter, EUV lithography has to be done in vacuum. This also means that traditional lenses cannot be used with EUV because they absorb 13.5 nm light; instead, specialized multilayer mirrors are used. Even such mirrors absorb about 30% of the light, which is why powerful light sources are needed. This level of absorption can lead to ablative effects on the mirrors themselves, which introduces additional engineering challenges. To learn more how EUV LPP light sources work, check out this video.

https://youtu.be/8xJEs3a-1QU
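To put numbers on why the source has to be so bright: the ~30% absorption per mirror is from the quoted passage; the mirror count is an assumption (EUV scanners have on the order of ten reflective surfaces between source and wafer):

```python
# If each multilayer mirror absorbs ~30% of the EUV light, the fraction
# reaching the wafer falls off geometrically with the number of mirrors.
reflectivity = 0.70   # ~30% absorbed per mirror (from the quoted passage)
mirrors = 10          # assumed optical path, order of magnitude

throughput = reflectivity ** mirrors
print(f"{throughput:.1%} of source light reaches the wafer")  # ~2.8%
```

So only a few percent of the source output ever reaches the resist, and the other ~97% ends up as heat in the mirrors, hence the active cooling mentioned upthread.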


Also mW for Ultraviolet vs. around 100W for EUV


loved your insight into this, thank you


ASML is shipping their new EUV machines this year, I think we can expect some beautiful chips in a few years.


ASML point of view: "Litho today, litho tomorrow." On to 5nm! On to 2nm! That's 10 silicon atoms across.

[1] http://staticwww.asml.com/doclib/investor/investor_day/asml_...


I wonder if there are actually structures as small as this in the CPUs. Even in Intel's 14nm process there are many features that are 26nm and upwards.


2nm is only 4 silicon atoms in a crystal matrix
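Rough check, assuming silicon's cubic lattice constant of ~0.543 nm:

```python
# How many silicon unit cells span a given feature size.
LATTICE_NM = 0.543  # silicon lattice constant

def unit_cells_across(feature_nm):
    """Number of Si unit cells spanning a feature of the given size."""
    return feature_nm / LATTICE_NM

print(unit_cells_across(2.0))  # ~3.7 unit cells across a "2nm" feature
print(unit_cells_across(7.0))  # ~12.9 unit cells across a "7nm" feature
```

Either way you count (unit cells or individual atoms), "2nm" leaves only a handful of lattice periods to work with.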


Excuse my cynicism, but won't it just be an incremental improvement, like for all previous years (~2x as many transistors per square millimeter)?


In theory, it's squared, not linear. So you should have ~4x as many transistors.


"... insanely expensive, incoherent, dim, and an operational headache". i'm near bursting with off-topic jokes.


There's a perfect storm coming for Intel. For years they were able to sit on fat server margins because of a superior fab process and CPU architecture. Now GF/Samsung/TSMC are competitive in fab technology and AMD has Zen. The days of high server margins are coming to an end.


And they probably know this. However, having banked those margins, they can now invest in technologies their competitors can't or won't.


They've been trying with the low-power server market (better $/W ratios), but I don't think I've seen much fanfare about their offerings. That market has largely been in ARM's court, though comparatively it's still a niche space.


This will help everything equally, but there are far greater gains to be made by moving compute closer to memory and benefiting from increased bandwidth. That's where I expect most of the gains to come from.


By moving closer you don't gain bandwidth, you reduce latency.


... and not by that much ...

... which is why projects targeting this are actually about scheduling tasks across nodes, not moving processing actually closer to the memory in the "let's put the CPU in the memory" sense.


Well, for many types of compute, transaction latency is everything. Every time you cut the difference in half, you cut the latency in half and get twice as many dependent responses. Shrinking the process produces huge improvements.

HBM exists for a good reason. You wouldn't stack dies (and hence reduce available thermal dissipation) if it didn't make a big difference.


It's in the name - HBM increases bandwidth. It does reduce latency a bit (since the electrical path is shorter), but the bigger and more fundamental latencies (row and columns) have not changed in many many years and are unlikely to. (It'd be possible to make them lower by proportionally higher power use and lower density, a trade-off which no one seems to be willing to make).


No, you do actually gain bandwidth. By moving the memory into the same package as the processor, you can add more data lines: adding 1000 more pins to a CPU is practically infeasible, while 1000 more signals between dies in the same package is relatively simple with the right technology.

In addition, due to the shorter distance and resulting benefits to signal integrity and power consumption, internal data lines can be clocked faster than those between chips.
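Rough numbers (illustrative figures I'm assuming, not from the thread): peak bandwidth is roughly bus width times per-pin transfer rate, so a wide in-package bus wins even at a modest clock:

```python
def bandwidth_gbs(bus_width_bits, transfer_rate_gts):
    """Peak bandwidth in GB/s: width (bits) x rate (GT/s) / 8 bits per byte."""
    return bus_width_bits * transfer_rate_gts / 8

# Illustrative round numbers:
print(bandwidth_gbs(64, 3.2))    # 64-bit DDR4-3200 channel: ~25.6 GB/s
print(bandwidth_gbs(1024, 2.0))  # 1024-bit HBM2 stack at 2 GT/s: ~256 GB/s
```

The HBM stack runs each pin slower than DDR4 yet delivers ~10x the bandwidth, purely because the in-package interposer makes a 1024-bit bus feasible.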


Typically, improving latency also improves bandwidth, whereas improving bandwidth tends to hurt latency.


How is GF's 7nm process in comparison to Intel's 10nm process? Is Intel still much denser like with 14nm vs the rest of "14nm"?


Take a look at the table in the article; almost everyone who is calling their next process "7nm" is roughly on par with Intel's 10nm process that they just booted up to mass production.

The nm wars are worse than the MHz wars because chip foundries are so reluctant to publish process details, but suffice it to say that Intel hasn't given up as much ground as companies like GF and Samsung would rather you believe.


Ars has a nice review[0] of the process improvements that Intel has invested in.

0: https://arstechnica.com/information-technology/2017/03/intel...


article table tl;dr: intel is roughly similar and in production right now, tsmc should be too, GF and samsung are at least 6 months out, probably a year or more ('risk production date').


There's a handy table in the article comparing them.


Always relevant for informational purposes when chip manufacturing is discussed:

https://www.youtube.com/watch?v=NGFhc8R_uO4


So now a transistor is only like ~35 atoms across!


No.

The feature size is 7nm. That's like saying the tightest radius in your CNC mill's tooling is 3mm. The stuff you make is far larger. Just look at the SRAM area: it's .0269 square microns. And SRAMs are tiny compared to "gate" RAMs, like you'd see in a book.


The first line in that slide still blows me away.

17 million gates per square millimeter.


Yeah. I'm gonna attempt a Fermi analysis to try to give an idea of how much room is still "down there". 1 sq um is 100,000,000 sq angstroms. The unit cell of crystalline Si is probably around 50 sq A. That gives about 2,000,000 unit cells per sq micron. My guess would be about 5,000,000 "surface" atoms. The SRAM is .025 of that, perhaps 120,000 atoms. SRAMs are cubes, so about 40,000,000 atoms cubically. Assume an SRAM is about the size of 8 transistors (that's a big SRAM): that's about 5,000,000 atoms per transistor. I'm going to fudge it down to 500,000 atoms, to be conservative.

The smallest logic I've heard of is a transistor using 7 atoms in toto. That is a factor of 70,000x, or about 16 generations.

Now, how we'll get there, I don't know. Also, I think the gap between generations will probably widen to 5 or more years. So... 75 to 100 years of "room"?
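The same Fermi estimate as a script, keeping the comment's round numbers and fudge factors throughout:

```python
import math

# Reproducing the Fermi estimate above with the same round numbers.
atoms_per_sq_um = 5e6        # guessed "surface" atoms per sq micron
sram_area_um2 = 0.025        # SRAM cell area from the article, sq um

surface_atoms = atoms_per_sq_um * sram_area_um2  # ~125,000 atoms
cube_atoms = surface_atoms ** 1.5                # ~44 million, if cubic
per_transistor = cube_atoms / 8                  # big 8-transistor cell
conservative = per_transistor / 10               # fudge down to ~500k atoms

# Against a demonstrated 7-atom transistor, counting density doublings:
generations = math.log2(conservative / 7)        # ~16 generations of "room"
print(round(conservative), round(generations))
```

The ~16-generation figure matches the comment's 70,000x headroom estimate; every step here is order-of-magnitude hand-waving, which is the point of a Fermi analysis.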


My understanding is that the active gate region is far smaller already so there is less room to scale than you might think.


The active region is smaller, but this is an argument about total bulk. IBM has shown functioning transistors with only a handful of atoms, already.


Yes, but that's the limit. Quantum effects mean we'll probably never see a single-atom transistor. Even if we did that's only a few more scalings. The rest of the features won't shrink as much anyway.

We can see the end of the road. AFAIK no one has any clue how to even start thinking about 2nm or beyond. No one is even certain we will hit 5nm.


At what point does stray ionizing radiation make devices too error-prone to be useful I wonder?


Yes that is amazing, most biological components are bigger:

https://en.wikibooks.org/wiki/Cell_Biology/Introduction/Cell...


Most biological components function in 3D space, though, not on a surface.


https://en.wikipedia.org/wiki/Multigate_device#Tri-gate_.283...

But you're right, actual stacked chips ("3D" ICs) aren't used much, largely for thermal management reasons.


I understand and agree, but most biological components at a sub-cell scale are membranes and pores in membranes, or even linear or circular structures like RNA/DNA.


IANAB but wouldn't most biological components in a cell be proteins (DNA codes for tens of thousands of proteins)?


Sorry for the delay to answer.

Proteins are indeed incredible little machines and for someone with a computer background, they are the most interesting entities in the biological zoo. And proteins may be very large, some are hundreds of nm.

But by seeing proteins as components, one misses the transport and signaling aspects, which are important both inside and outside the cell. It is a problem of scale: if we focus at one level, we miss what is going on at both smaller and larger levels. I think there is no easy solution to that problem.


Good to see I wasn't the only one to look at the table of atomic radius to check this.



I wish we had a comparison of drive currents but I'm sure that information isn't publicly available yet.


I don't follow foundry details closely... what are the practical ramifications for moving to 7nm, and how small can they get before physics gets in the way?


Physics is already getting in the way; that's why we have FinFETs, tri-gates, and likely GAAFETs (Gate-All-Around) in the near future. These structures are already in wide use in fab processes to combat quantum tunneling and current leakage.


Physics has been "getting in the way" for 30+ years ;)



