But then they'll also create a 4070 here. And then a 4070 ti, and 4080 ti, and then.... But that's another complaint. The 4080 is a different problem entirely - not enough names. Somehow they have too many names but also not enough so they can only differentiate via GB.
Their Performance section shows the 12GB card as 50%-150% faster than the 3080 Ti (so better than a 3090 Ti!), and it also has more CUDA cores and memory than the 3070.
Yea, but that's how new tech works. If the price of every new generation of hardware scaled linearly with the increase in performance, we'd need mortgages for a new PC/GPU.
That's how new tech worked in the Moore's Law era. Times are changing and you're observing the evidence of that.
And yes, Moore's law notionally died a long time ago, but its death has been hidden in various ways... and we're getting past the point where it can be hidden anymore.
Node costs are way way up these days, TSMC is expensive as hell and everyone else is at least 1 node behind. Samsung 5nm is worse than TSMC N7 for instance, and TSMC N7 is 5+ years old at this point.
Samsung 8nm (a 10nm-class node) used for Ampere was cheap as hell, which is why they did it. Same for Turing: 12FFN is a rebranded 16nm non-plus with a larger reticle. People booed the low-cost offering and wanted something on a competitive, performant, efficient node... and now NVIDIA hopped to a very advanced customized N5P node (4N, not to be confused with TSMC N4; it's not N4, it's N5P with a small additional optical shrink) and people are back to whining about the cost. If they keep cost and power down by making the die smaller... people whine that they're stonewalling progress / selling small chips at big-chip prices.
Finally we get a big chip on a competitive node, much closer to the edge of what the tech can deliver, and... people whine about the TDP and the cost.
Sadly, you don't get the competitive node, giant chip, and low prices all at the same time, that's not how the world works. Pick any two.
AMD is making some progress at shuffling the memory controller off to separate dies, which helps, but generally they're probably going to be fairly expensive too... they're launching the top of the stack first which is a similar Turing-style "capture the whales while the inventory of last-gen cards and miner inventory sells through" strategy. That likely implies decently high prices too.
There's nothing that can really be done about TDP either - thermal density is creeping up on these newer nodes too, and when you make a big chip at reasonable clocks the power gets high these days. AMD is expected to come in at >400W TBP on their products too.
I think typically the xx70 has been in line with the xx90 Ti of the previous generation, so this is in fact a better improvement than that. It's still dumb to have two xx80s though.