Yet another fab pushing deep ultraviolet lithography as far as it will go for 7nm, rather than going to "extreme ultraviolet". "Extreme ultraviolet" is really soft X-rays. The "light source" is either a synchrotron, or an incredible kludge where droplets of tin are vaporized by lasers. Those sources also produce totally incoherent light, while the ordinary DUV processes use lasers producing coherent light, which focuses better.
Deep ultraviolet light source: [1] Little box.
Extreme ultraviolet light source: [2] Two floors of equipment.
Nobody really wants to go to EUV with the existing sources. The industry hopes for an EUV source that isn't insanely expensive, incoherent, dim, and an operational headache. But there's nothing better coming along in the near term. Intel and Samsung have chosen to build EUV fabs, to be ready in 2019, maybe. Everybody else is trying hard not to.
I know nothing about this area, so this is kind of an ELI5 question, but: If the wavelength of the light defines the lower limit of detail size, why can't they switch to actual x-rays, which we already seem to be really good at producing?
Not my field, but I'd guess that one problem is that the hotter the photon, the better it is at penetrating matter. EUV photomasks are already a real pain in the butt to make[1], and making them work for shorter wavelengths probably introduces even more excitingly intractable engineering problems.
Masks may be a pain, but the pellicle (the part that protects the masks from contamination) is non-existent unless they've made a breakthrough I'm unaware of:
basically this means any dust or contamination of the mask by anything bigger than 50-80 nm will possibly ruin the maskset and destroy the yield until it is replaced. It's a huge challenge.
One fun fact about EUV lithography that I learned recently: the masks and optics all work by reflection, since EUV photons aren't very "optical": you either have a thin film that EUV ghosts right through, or a thick film that stops it. Changing the type of atom doesn't much matter; there's not really anything akin to a pigment for energetic photons that's also thin enough to be used for nanometre lithography.
But EUV doesn't like to reflect much, either, so the mirrors are made of stacks of alternating thin films (molybdenum/silicon multilayers), which absorb a lot of the light and need active cooling. That's the reason EUV sources have to be so much brighter than DUV sources: the optics eat most of the light getting it to the wafer.
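To put numbers on how those losses compound, here's a rough sketch assuming ~70% reflectivity per multilayer mirror and a ~10-mirror optical train, both round figures rather than any specific scanner's spec:

    # Rough EUV optical-train loss estimate. Reflectivity and mirror
    # count are assumed round numbers, not a real scanner's spec.
    reflectivity = 0.70    # per Mo/Si multilayer mirror at 13.5 nm
    mirrors = 10           # illuminator + projection optics combined

    transmitted = reflectivity ** mirrors
    print(f"light reaching the wafer: {transmitted:.1%}")   # ~2.8%
    # i.e. the source must be roughly 35x brighter than the dose
    # needed at the wafer would naively suggest.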
Actually, x-ray lithography was a competing technology to EUV for a long time, but it has its own problems, not the least of which are that the X-rays go right through the wafer itself and that optics for them are ridiculously hard to make. Masks are already really hard to write and are projected to explode in price (not that they haven't already). Don't crush chip startups under mask costs ;)
>why can't they switch to actual x-rays, which we already seem to be really good at producing?
The energy level is unnecessary. The lithography process does a "step-down" via reduction projection optics (typically 4x), which, together with the wavelength, determines the feature size. Building those optics becomes dramatically harder as you decrease wavelength (i.e. increase energy).
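For a rough feel of how wavelength maps to feature size, the usual rule of thumb is the Rayleigh criterion; here's a sketch with ballpark k1/NA values (assumptions for illustration, not any fab's published spec):

    # Rayleigh criterion: half-pitch CD = k1 * wavelength / NA.
    # The k1 and NA values below are ballpark assumptions.
    def min_half_pitch(wavelength_nm, na, k1):
        return k1 * wavelength_nm / na

    # 193 nm immersion DUV: water immersion pushes NA to ~1.35
    print(min_half_pitch(193.0, na=1.35, k1=0.30))   # ~43 nm
    # 13.5 nm EUV: reflective optics limit NA to ~0.33
    print(min_half_pitch(13.5, na=0.33, k1=0.40))    # ~16 nm

That ~43 nm single-exposure floor is why DUV at 7nm leans so heavily on multiple patterning.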
From [2]: To put it simply: in order to generate 13.5 nm EUV light in a special plasma chamber, you need a very powerful laser (because a significant amount of its power will be wasted); a generator and a catcher for tin droplets (in addition to a debris collector); as well as a special, nearly perfect, elliptical mirror. To make everything even trickier, since EUV light with 13.5 nm wavelength can be absorbed by almost any matter, EUV lithography has to be done in vacuum. This also means that traditional lenses cannot be used with EUV because they absorb 13.5 nm light; instead, specialized multilayer mirrors are used. Even such mirrors absorb about 30% of the light, which is why powerful light sources are needed. This level of absorption can lead to ablative effects on the mirrors themselves, which introduces additional engineering challenges. To learn more about how EUV LPP light sources work, check out this video.
There's a perfect storm coming for Intel. For years they were able to sit on fat server margins because of a superior fab process and CPU architecture. Now GF, Samsung, and TSMC are competitive in fab technology, and AMD has Zen. The days of high server margins are coming to an end.
They've been trying with the low-power server market (better $/W ratios), but I don't think I've seen much fanfare about their offerings. That market is still largely ARM's, though it's a comparatively niche space.
This will help everything equally, but there are much bigger gains to be made by moving compute closer to memory and benefiting from the increased bandwidth. That's the technology I expect most gains to come from.
... which is why projects targeting this are actually about scheduling tasks across nodes, not literally moving processing closer to the memory in the "let's put the CPU in the memory" sense.
Well, for many types of compute, transaction latency is everything. Every time you cut the distance in half, you cut the latency in half and get twice as many dependent responses per unit time. Shrinking the process produces huge improvements.
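A back-of-envelope sketch of the wire-latency side of that, assuming signals propagate at roughly half the speed of light in board/package interconnect (an assumed typical figure):

    # Wire propagation latency vs. distance. ~0.5c is an assumed
    # typical propagation speed for copper interconnect.
    C = 3e8                 # m/s, speed of light in vacuum
    v = 0.5 * C

    for dist_mm in (50, 5):    # board-level trace vs. on-package link
        rt_ns = 2 * (dist_mm / 1000) / v * 1e9
        print(f"{dist_mm} mm: {rt_ns:.2f} ns round trip")
    # 50 mm: ~0.67 ns; 5 mm: ~0.07 ns. Halving the distance halves
    # the wire delay, though DRAM array timings usually still dominate.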
HBM exists for a good reason. You wouldn't stack dies (and hence reduce available thermal dissipation) if it didn't make a big difference.
It's in the name: HBM increases bandwidth. It does reduce latency a bit (since the electrical path is shorter), but the bigger and more fundamental latencies (row and column access times) haven't changed in many, many years and are unlikely to. (It'd be possible to make them lower at the cost of proportionally higher power use and lower density, a trade-off no one seems willing to make.)
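To illustrate how flat those latencies have been, compare CAS latency in wall-clock time across a few representative speed grades (typical bins; exact values vary by part):

    # CAS latency in nanoseconds across DRAM generations.
    # Speed grades below are typical/representative, not exhaustive.
    parts = [
        ("DDR2-800",  400e6,  5),    # name, I/O clock (Hz), CAS cycles
        ("DDR3-1600", 800e6,  11),
        ("DDR4-3200", 1600e6, 22),
    ]
    for name, clock, cas in parts:
        print(f"{name}: {cas / clock * 1e9:.2f} ns CAS")
    # All land around 12.5-13.75 ns: bandwidth scaled, latency didn't.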
No, you do actually gain bandwidth. By moving the memory into the same package as the processor, you can add more data lines: adding 1000 more pins to a CPU is practically infeasible, while 1000 more signals between dies in the same package is relatively simple with the right technology.
In addition, due to the shorter distance and resulting benefits to signal integrity and power consumption, internal data lines can be clocked faster than those between chips.
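The bandwidth math here is simple: bus width times per-pin data rate. A sketch with illustrative generation-typical numbers for one DDR4 channel versus one HBM2-class stack:

    # Peak bandwidth = bus width * per-pin data rate.
    # The figures below are illustrative, generation-typical numbers.
    def peak_gb_per_s(bus_bits, gbps_per_pin):
        return bus_bits * gbps_per_pin / 8   # GB/s

    print(peak_gb_per_s(64, 3.2))     # one DDR4-3200 channel: ~25.6 GB/s
    print(peak_gb_per_s(1024, 2.0))   # one HBM2 stack:        ~256 GB/s

A 1024-bit bus is only practical when the memory sits in the same package as the processor.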
Take a look at the table in the article; most everyone who is calling their next process "7nm" is roughly on par with Intel's 10nm process that they just booted up to mass production.
The nm wars are worse than the MHz wars because chip foundries are so reluctant to publish process details, but suffice it to say that Intel hasn't given up as much ground as companies like GF and Samsung would rather you believe.
Article table tl;dr: Intel is roughly similar and in production right now, TSMC should be too; GF and Samsung are at least 6 months out, probably a year or more ("risk production" date).
The feature size is 7nm. That's like saying the tightest radius in your CNC mill's tooling is 3mm: the stuff you make is far larger. Just look at the SRAM cell area: it's 0.0269 square microns. And SRAM cells are tiny compared to RAMs built from discrete gates, like you'd see in a textbook.
Yeah. I'm gonna attempt a Fermi analysis to try to give an idea of how much room is still "down there". 1 sq um is 100,000,000 sq angstroms. The unit cell of crystalline Si is probably around 50 sq A. That gives about 2,000,000 unit cells per sq micron. My guess would be about 5,000,000 "surface" atoms. The SRAM cell is 0.025 of that: perhaps 125,000 atoms. Treating the SRAM as a cube, that's about 40,000,000 atoms. Assume an SRAM is about the size of 8 transistors (that's a big SRAM): that's about 5,000,000 atoms per transistor. I'm going to fudge it down to 500,000 atoms, to be conservative.
The smallest logic I've heard of is a transistor using 7 atoms in toto. That is a factor of 70,000x, or about 16 generations.
Now, how we'll get there, I don't know. Also, I think the generation gap will probably widen to 5, or more, years. So... 75 to 100 years of "room"?
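For anyone who wants to poke at the guesses, here's the same back-of-envelope arithmetic as a script (every input is one of the estimates above, nothing measured):

    # Reproduces the Fermi estimate above; all inputs are guesses.
    import math

    sq_um_in_sq_A  = 1e8      # 1 um^2 = 10^8 A^2
    unit_cell_sq_A = 50       # rough footprint of a Si unit cell
    atoms_per_cell = 2.5      # gives ~5M "surface" atoms per um^2
    sram_fraction  = 0.025    # 0.0269 um^2 SRAM cell, rounded

    surface_atoms  = sq_um_in_sq_A / unit_cell_sq_A * atoms_per_cell
    sram_surface   = surface_atoms * sram_fraction     # ~125,000
    sram_volume    = sram_surface ** 1.5               # cube it
    per_transistor = sram_volume / 8 / 10              # 8T cell + fudge

    generations = math.log2(per_transistor / 7)        # vs. 7-atom device
    print(f"{per_transistor:,.0f} atoms/transistor, "
          f"~{generations:.0f} halvings of atom count left")
    # ~550,000 atoms/transistor, ~16 generations, as estimated above.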
Yes, but that's the limit. Quantum effects mean we'll probably never see a single-atom transistor. Even if we did, that's only a few more scalings. The rest of the features won't shrink as much anyway.
We can see the end of the road. AFAIK no one has any clue how to even start thinking about 2nm or beyond. No one is even certain we will hit 5nm.
I understand and agree, but most biological components at a sub-cell scale are membranes and pores in membranes, or even linear or circular structures like RNA/DNA.
Proteins are indeed incredible little machines, and for someone with a computer background they are the most interesting entities in the biological zoo. And proteins can be very large; some are hundreds of nm across.
But by seeing proteins as components, one misses the transport and signaling aspects, which are important inside and outside the cell. It is a problem of scale: if we focus at one scale level, we miss what is going on at smaller as well as larger levels. I think there is no easy solution to that problem.
I don't follow foundry details closely... what are the practical ramifications for moving to 7nm, and how small can they get before physics gets in the way?
Physics is already getting in the way; that's why we have FinFETs/tri-gates, and likely GAAFETs (gate-all-around) in the near future. FinFETs are already in wide use in fab processes to combat quantum tunneling and current leakage.
[1] http://www.oxxius.com/LUV-series-266nm-280nm-CW-laser [2] http://www.anandtech.com/show/10097/euv-lithography-makes-go...