Getting so tired of this knee-jerk complaint. Do you design standard cell libraries or analog ASIC IPs? If not, why would you care that "1nm" doesn't correspond to an actual dimension in the process? I work in ASIC design on a node where the name still matches the minimum gate length, and the number of times this has actually mattered to me is exactly zero.
What does matter to most people is transistor density and other performance metrics that actually have an impact on the product. The "Xnm" number has always given a rough indication of the improvements in transistor density, and all the foundries have done is continue to name the nodes as if that's what matters... which is indeed the case.
TSMC does sometimes call the nodes e.g. "N7" now. But nobody that matters cares, because everyone understands that "7nm" is a marketing term.
Why doesn't the industry want to create a new metric?
Like 2 billion transistors/square cm; let's call it 2 btsc or something. Surely they use something like that internally, so why not publicly? Maybe there are different kinds of "transistors", some (CPU) bigger than others (RAM)?
I work in chip design on single digit "nm" processes.
Internally we might say 5nm to refer to it generally, and N5 or N5P to refer to a specific process. When you need more specific numbers you look them up. Exact design parameters are carefully protected trade secrets.
Commenters on Hacker News care a lot more about this distinction than the engineers working with it. Nobody cares that it is not gate length or half pitch.
The number of transistors per square centimeter will vary a lot depending on application. High-speed logic is less dense than power-efficient logic, which is less dense than RAM, for example. I suppose you could use a metric like a minimum-size 8T SRAM cell, but then you'd have complaints about foundries gaming the system by making process choices that shrink that cell at the expense of other structures.
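To make the gaming problem concrete, here's a minimal sketch of how a single-cell headline metric could be computed, and why it can mislead. All cell areas below are made-up illustrative numbers, not any foundry's real figures:

  # Hypothetical illustration only; the cell areas are invented numbers.
  def bit_density_mb_per_mm2(sram_cell_area_um2):
      """Megabits per mm^2 implied by one SRAM bitcell's area (um^2)."""
      bits_per_mm2 = 1e6 / sram_cell_area_um2  # 1 mm^2 = 1e6 um^2
      return bits_per_mm2 / 1e6  # bits -> megabits

  # Process B shrinks the measured SRAM cell aggressively but regresses
  # on an unmeasured high-speed logic cell.
  processes = {
      "A": {"sram_um2": 0.030, "hs_nand2_um2": 0.080},
      "B": {"sram_um2": 0.021, "hs_nand2_um2": 0.095},
  }
  for name, p in processes.items():
      print(name, round(bit_density_mb_per_mm2(p["sram_um2"]), 1), "Mb/mm^2")
  # B "wins" the headline metric (47.6 vs 33.3 Mb/mm^2) despite having
  # the worse high-speed logic cell, which the metric never sees.

Any single reference structure invites exactly this kind of optimization toward the benchmark.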
Intel kind of tried this. In their presentations while they were struggling with 10nm, they tried focusing on transistor density figures. That failed; people remained confused, especially with the "Xnm+++" convention for same-node improvements. Intel gave up its old naming convention last year and now uses a marketing naming scheme similar to TSMC's and Samsung's.
Moreover, density is only one metric of improvement; the other major one is power efficiency. Foundries want to market those improvements too.
A mass misbelief in the continuation of Moore's law into impossible sub-molecular scales. There is no more low-hanging fruit of "just make the optics more precise so the lithography can print smaller features"; every process-node step we take now is instead a "side-step" that involves solving increasingly complex puzzles to make same-sized features do more per molecule. It's code golf at the hardware level. And as in code golf, there's a pretty low ceiling on how intricately interwoven features can become: you can't eliminate the fundamental need to have all those features in the code (circuit) in some sense.
The visible effect of this is an economic one: we've gone from spending linearly more marginal CapEx per fab process-node upgrade to spending geometrically more marginal CapEx. Each process node under 8nm has been twice as costly to design as the one before it. This is untenable.
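To put the linear-vs-geometric point in toy numbers (the base cost and node list here are made up; the doubling is just the claim above):

  # Toy model, not real figures: +$100M per node vs 2x per node.
  nodes = ["16nm", "10nm", "7nm", "5nm", "3nm"]
  base_m = 300  # hypothetical design cost in $M at the first node
  linear = [base_m + 100 * i for i in range(len(nodes))]    # fixed step
  geometric = [base_m * 2 ** i for i in range(len(nodes))]  # doubling
  for node, lin, geo in zip(nodes, linear, geometric):
      print(f"{node}: linear ${lin}M, geometric ${geo}M")
  # Last node: $700M vs $4800M; the doubling regime dominates quickly.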
The particular measurements of the elements are not interesting, but the transistor densities are meaningful. I would expect a "1nm" chip to achieve 100x the density of a "10nm" chip. That seems unlikely.
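For what it's worth, the 100x expectation is just area-scaling arithmetic, assuming the name tracked a real linear feature size (which it no longer does):

  \text{density} \propto \frac{1}{L^2}
  \;\Rightarrow\;
  \frac{\text{density}_{1\,\text{nm}}}{\text{density}_{10\,\text{nm}}}
    = \left(\frac{10}{1}\right)^2 = 100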
It's not a deception: the transistor itself is 1nm; the gate is larger because of physics. We're talking about gates composed of ~100 freaking atoms, with some transistor dimensions down to a dozen or so. And we're complaining that they're using the "wrong metrics." We are nearing Moore's limit; might as well rejoice when they bleed out another nm or so and can pack in a few billion more transistors.
The active part of a FET is the area that is under the gate or surrounded by the gate.
It makes no sense to speak of a transistor smaller than the gate; there is no such thing.
Besides the active part, whose conductance is controlled and variable, the transistor includes parts that are either electrical conductors, like the source, the drain, and the gate electrode, or electrical insulators.
While those parts may be less important than the active part, they also have a major influence on the transistor characteristics, by introducing various resistances and capacitances in the equivalent schematic.
What matters is always the complete transistor. The only dimensions in a current transistor that are around 1 nm are vertical dimensions, in the direction perpendicular to the semiconductor surface, e.g. the thickness of the gate insulator.
The 2D semiconductors proposed for the TSMC "1 nm" process are substances whose structure is made of 2D sheets of atoms, like graphite, but which, unlike graphite (a 2D electrical conductor), are semiconductors.
In this case the thickness of the semiconductor can be reduced to a single layer of atoms, which is not possible with semiconductors that are 3D crystals, like silicon, because when they no longer form a complete crystal their electrical properties change a lot and they can become conductors or insulators instead of remaining semiconductors.
There is little doubt that to reduce transistor dimensions further than is possible with a 3D semiconductor like silicon, a transition to 2D semiconductors will be necessary at some point. It remains to be seen when that will be possible at an acceptable cost, and whether such smaller transistors can improve the overall performance of a device, because making smaller transistors makes sense only when it allows a lower price, smaller volume, or higher performance for the complete product.
> So many people believe that it is the actual gate size
They never stated it was the actual gate size in the first place. Maybe you should blame the media for it?
And just to reply to the parent comments:
> TSMC does sometimes call the nodes eg "N7" now.
It is not just sometimes. TSMC has always been very careful to call it N7 or N10, all the way back to the 28nm era. What the media, marketing, and other PR decided to call it is an entirely different matter. This is especially problematic since ~2015, when these nodes started getting thousands if not millions of times more media coverage.
Language drifts. You might as well be upset that a British pound is no longer a pound of silver, or that people now use "very", "really", and "literally" as intensifiers rather than as claims to absolute truth.
Investors. Even relatively sophisticated investors can be bamboozled by various claims. Because investors are usually trying to pick the future winners, they need these things as proxies to determine who is actually going to have the faster chips, the lower costs, etc.
Analysts are often not from the exact line of business. They may have a bachelor's in engineering, or something, but they're usually not domain experts beyond having studied the field from a business perspective and gotten used to its lingo. So, these things can be kind of manipulative. That said, it's par for the course. It's better than outright lying and bribing the analysts like in the 2000-2002 telecom bust.
Colloquially, lots of successful technologies have been referred to as "AI", so it doesn't seem that unreasonable that investors would look into that kind of stuff. On the other hand, if somebody's investment strategy assumes, like, artificial general intelligence is coming soon: investing involves risk, and investing while stupid involves disproportionate risk, I guess.
> Since when is it acceptable to trick laypeople, especially when laypeople are buying products with the false advertising (cpus...)?
This is such a HN issue. It's not tricking people, because the vast majority of laypeople don't even care about node size, gate size, or the manufacturing process. The number of times I have encountered a person outside of a tech bubble who cared about the manufacturing process of a CPU is exactly zero.
What people do care about: does it work, is it fast, is it efficient, and can I afford it? At least for performance and efficiency, the marketing term does give a somewhat reliable indication of the generational gains.
Depends on the instruction mix, I guess, right? I'm sure you could come up with benchmarks that disagree (think of the silly case: branchy, hard-to-predict code on one of those super-long-pipeline Pentiums).
Processors are just fundamentally complicated, and boiling them down to a couple of numbers is hard. Any modern one is pretty good, and saying anything more than that depends on your workload, I guess.
Wait -- based on the graphic at that page, should we be taking away that TSMC's and Samsung's shipping 6nm chips have only a very slight transistors/mm^2 advantage over Intel's 10nm chips? And just as importantly, that "XXnm" is nothing more than a marketing tool?!
Hyperbole aside, it's undeniable that TSMC's chips for Apple deliver more computing power per watt than Intel's chips. But is that all they bring to the table? I don't mean to disparage power efficiency; it's awesome. But the question stands.
This comment appears under every article that has "nm" in it. For the millionth time, "1 nm" should be read as "1 nm equivalent". The purpose of the naming is to keep scaling the name along with density.
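One way to decode the names, assuming each full node step is meant to advertise roughly a 2x density gain (the historical Moore's-law cadence, not any official formula):

  \text{name}_{n+1} \approx \frac{\text{name}_n}{\sqrt{2}},
  \qquad \text{e.g.}\;\; \frac{7\,\text{nm}}{\sqrt{2}} \approx 5\,\text{nm},
  \quad \frac{5\,\text{nm}}{\sqrt{2}} \approx 3.5\,\text{nm}

On that reading, "1 nm" promises a density step over the previous node, not a 1 nm feature.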