For those not in the know, the "4080 12GB", as compared to the 4080 16GB, was not just the same card with a little less RAM, as you might assume from the name. It also had ~20% fewer GPU cores and was significantly slower for that reason.
Did they just think it was too good to be a 70 level card? It’s not like they had a 4070 and a 4070Ti and a Super 4070 already and had to figure out another way to market it.
I just don’t see the logic in the naming in any way, shape, or form.
The 4080 12GB's performance is about where a new 60-series card sits compared to the previous-gen 80-series card (60-series cards tend to be within ±10% of the previous-gen 80 series; on that basis the 4080 12GB is at most a 60 Ti).
The 4080 16GB is around where a 70 series card tends to sit.
Yes, a youtuber who goes by ‘I’m a mac’ made a great (at least I thought so) series of videos breaking down how this generation's die sizes compare to previous generations:
(The first where he starts comparing generations with graphs) https://youtu.be/9_bqwEy5AQ4
Though I have to wonder, do we want bigger 40-series dies? The 4090 already guzzles 400W at full tilt, gets at least 200+ fps in pretty much all games except maybe Cyberpunk, and can complete video editing workloads at twice the speed of the 3090.
Considering the 4090 gets 95% of the performance in games if you set a 60% power target [0] (~90% in synthetics), I feel like they just juiced the power draw like crazy to be able to say they had the fastest GPU this generation (same with AMD's Ryzen 7000 CPUs, where you can drop from ~240W to 140W for 95% of the multicore / 100% of the single-core performance).
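If you want to put that trade-off into numbers, here's a quick back-of-the-envelope in Python using only the relative figures quoted above (actual wattages vary by card and workload, so this is a sketch, not a measurement):

    # Relative efficiency gain from a reduced power target, using the
    # figures from the comment above (100% perf at stock, ~95% at 60% power).
    full_power, full_perf = 1.00, 1.00      # stock power target
    capped_power, capped_perf = 0.60, 0.95  # 60% power target, ~95% perf in games

    gain = (capped_perf / capped_power) / (full_perf / full_power)
    print(f"perf-per-watt vs stock: ~{gain:.0%}")                    # ~158%
    print(f"perf lost: {1 - capped_perf:.0%}, power saved: {1 - capped_power:.0%}")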
Also, I was playing the Darktide beta, and at 4k I want about double the framerate I currently get on a 6800xt (FSR 1.0 and DLSS (according to friends and comments on reddit+discord) both look bad in it). We're moving past cross-gen games at this point too (and UE5 seems very heavy), so for e.g. 100+ fps or 4k, something in the range of ~50% faster than a 3090 would be nice.
My guess is that AI workloads are going to be a significant driver of demand for high-end cards. It used to be gaming and mining, and now we have AI art generation that requires high-end cards to be productive.
Now that mere mortals can play around with Stable Diffusion and other art-generation technologies, this is going to increase demand for these types of cards.
Folk are already using lower end cards with Stable Diffusion, even the 4GB models.
My 3080 is well capable of generating 512 and 768 px images with a high step count in under a minute, and 500-odd steps (just for fun) in around 2 minutes (or even less with different samplers).
The interesting bit is when we get more VRAM (24-32GB) and can actually train the models with off the shelf hardware without breaking the bank.
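For anyone curious what that looks like in practice, here's a minimal sketch using the Hugging Face diffusers library - the checkpoint name, prompt, and the attention-slicing trick for low-VRAM cards are illustrative choices, not anything from the comments above:

    # Minimal Stable Diffusion generation sketch for a consumer GPU.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",   # assumed checkpoint; swap in your own
        torch_dtype=torch.float16,          # half precision roughly halves VRAM use
    ).to("cuda")
    pipe.enable_attention_slicing()         # helps the ~4GB cards mentioned above

    image = pipe(
        "a photo of an astronaut riding a horse",
        height=512,
        width=512,
        num_inference_steps=50,             # step count is the main knob on generation time
    ).images[0]
    image.save("astronaut.png")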
And soon, it won't just be the people who are "playing around" with AI models on their home machine. It will be games and commercial products using ML at the client level that will have system requirements around GPUs with a lot of memory for the better results. A lot of this can be done at the cloud level, but if you're clocking heavy hours on cloud GPU, that subscription model is going to be very expensive and will introduce latency if a lot of data needs to be exchanged. Since you can't (by the license agreement on NVidia's drivers) just throw a bunch of consumer-grade hardware in a datacenter to do ML loads like you can for mining, the price shoots up if you want to do the heavy lifting on a central server.
You can get an RTX 3090 (MSRP $1499) for about $900-950 now. (In other words, the prices on these cards really should come down a bit.)
So while the graphics card formerly known as RTX 4080 12GB might perform about the same as the RTX 3090, it's no better value.
The issue isn't the pricing, even though that's an issue.
The issue is what looks like an intentional attempt to confuse consumers by selling two products with roughly the same name, but one of them has 25% greater performance. Some consumers may think "I'm coming from a 6GB card, 12GB is plenty, I don't need to pay more for the 16GB one, I'll get almost the same performance" without knowing the technical details.
Ideally they'll still read/watch reviews and get the best product for their budget, but that doesn't excuse misleading names.
This was me. I was too busy to read/watch anything, 12gb seemed right in the middle of the range, and I blindly pulled the trigger. Later found out I wasn’t quite buying what I thought I was. It’s shitty naming and I’m glad they’re fixing it.
> This was me. I was too busy to read/watch anything, 12gb seemed right in the middle of the range, and I blindly pulled the trigger. Later found out I wasn’t quite buying what I thought I was. It’s shitty naming and I’m glad they’re fixing it.
Are you talking about last gen? AFAIK you can’t actually buy the 4080 12gb yet, right?
> So while the graphics card formerly known as RTX 4080 12GB might perform about the same as the RTX 3090, it's no better value.
So you'd get the same price, just a newer chip, a newer encoder, DLSS3 support and all that, at a lower MSRP than what the 3090 cost when it launched?
That sounds like same value to me... or even a slightly better deal.
It is a great deal for 3 years ago, but tech is supposed to get cheaper as time progresses. No one says that a 42" 1080p LCD HDTV for $300 is a great deal because they were $800 8 years ago. It is a decent price, but for $800 you can get a 60" 4K display with HDR.
(prices made up, I have no idea how much TVs cost right now)
If you only compare those two cards it might be a better deal, but there's a spectrum of cards, including soon to be released next generation from competitors.
At any rate, $900 is too much! Of course different buyers have different budgets and tolerance for rapidly increased pricing. Two years ago I increased my $220 budget for GPUs to $300. Maybe next year I'll go as high as $350.
You might have missed it with how long the inflated market lasted, but in the last few months, with the ethereum PoS transition and the threat of recession keeping gamers at bay, they literally haven't been able to sell 30-series cards except at quite a bit under MSRP, in what was a very sudden reversal from being above MSRP. Maybe a new series stole the demand, but part of the reason they're holding out on lower-tier SKUs is that the 30 series is not going away fast enough, so there's a good chance we're back in the 10-series vs 20-series days, where there's so much used stock of the old series and such a perception that nvidia is gouging that it hampers sales of the new ones.
30 series launch MSRPs looked very good until it became clear you couldn't get them for that price, partly because they were having to partially roll back the 20 series price increases, so it wouldn't be impossible to see a repeat.
For the 30 series it was the 3080 in particular which looked fantastic - it had 95% the performance of the 3090 for half the price ($750 vs $1500) and it was a large improvement on the previous generation.
This time around the 4090 is an excellent card, the 4080 16GB is a lot slower and the 4080 12GB a lot slower than the 16GB.
They've also jacked the prices: the 4080 should be $750 and the 4090 should be $1500. Arguably lower again given the coming recession and the market being flooded with the previous generation.
It will be interesting to see what AMD do as they could make life very difficult for nVidia.
Meh, only if you accept that the 4090 is really a Titan or a Quadro under another name. And to be fair, I think that it probably is. But that doesn’t match with the consumer designation that we see in these things and so bargain or not, this is a professional card being marketed to consumers.
If you’re looking for a pro card and have the cooling and power to support it (not to mention, the workflow needs that could benefit), that’s great. If you’re a gamer or enthusiast, the price is still high enough (not to mention the other changes you might need to make to your rig to support the card) that the actual delta between the potential and what you’ll actually do with the card means you should probably just stick with either a 3090 that is now half the price, or hold out for the other 4000 series cards if you must get a next-gen card.
The 4090 can’t actually play Cyberpunk in 4k at max settings with RT without stuttering. People are seeing 22-30 FPS walking around.
There is a lot of confusion because some people assume turning on DLSS increases settings but it actually lowers quality. Sure DLSS is good enough that most don’t notice, but you can say that about lowering most settings slightly to improve performance.
Not really. Like yes, enabling DLSS lowers the real rendering resolution, but it looks better than native in pretty much all cases, minus some motion artifacts. You'd be silly not to use DLSS in a game that supports it. On a 4k screen anyway, it's not quite as great at lower target resolutions.
Recently, unless they changed something in the last week, DLSS 2 is IMO not worth it for playing at 4k vs 1440p.
DLSS 3 still only looks fine on cherry picked screens: “It looks like DLSS 3's weakness lies in hidden geometry, where information is missing between two frames due to geometry overlapping another set of geometry while in motion. This can cause DLSS 3 to "shutter" and output ugly artifacts as it tries to fill in the void of missing detail.” https://www.tomshardware.com/news/dlss-3-early-review-rtx-40...
This is incorrect. If you want to play e.g. Cyberpunk at 4k maxed out with decent framerates the 4090 is the only card that gets you there. Especially once you start looking at meaningful numbers like 1% lows, even the 3090 is struggling to break the low 30s.
It doesn’t apply to every game or every resolution, but there are actual game scenarios where a 4090 makes sense. Price and heat and power and space notwithstanding, it’s a meaningful upgrade.
Dangit, I do want to play Cyberpunk maxed in 4k. Sorry kids, I guess you'll have to take a gap year before you can go to college, daddy's gonna upgrade.
I will add that, IMO, if it's possible for any gpu to play a just-released game at maximum settings at a high resolution at 60+ fps, the developers haven't set the maximum quality settings high enough. Leave some headroom for future GPUs, or allow players who e.g. prefer a ludicrously high draw distance at any cost to make that choice and dial down the resolution to compensate.
Most games don't meet this bar, and I think gamers who expect to be able to set every slider to the max and go to town—yes, even on a $2K GPU—are mostly to blame for that.
I disagree that developers should spend valuable engineering time on producing games that can’t be run. Spend that time making other games instead, or squashing bugs (looking at you, cyberpunk!) and maybe keep a backlog of future features to patch in when the hardware gets there.
As an actual gamedev - no real extra engineering time is spent on this - during development all textures and assets are usually produced in 8K or higher anyway, it's only during the data packaging step that everything gets downscaled to whatever was selected by tech art. There's a reason why my workstation has 256GB of ram - just loading up the main world of the game easily takes 100-150GB because everything is in such extreme quality. Same applies to visual effects - usually in development you will run all effects at full resolution to test how it all works.
Releasing an "extreme" preset that no consumer PC can really run yet is just a matter of leaving those higher-quality assets in + enabling those full-resolution options for VFX. Allowing users to use ray tracing at full resolution (something which will kill any GPU on the market) requires just adding an extra config option - there's very little engineering time required. QC will need to do a full pass of these options, but in the grand scheme of things tested that's not a huge deal.
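As a toy illustration of that packaging step, here's a sketch of how a build script might downscale full-resolution source textures per quality preset - the preset names, divisors, and paths are all made up, not how any particular engine actually does it:

    # Toy "downscale at packaging time" sketch: source art stays at full
    # resolution, each quality preset just picks a divisor when data is packed.
    from pathlib import Path
    from PIL import Image

    PRESETS = {"extreme": 1, "ultra": 2, "high": 4, "medium": 8}

    def package_textures(src_dir: str, out_dir: str, preset: str) -> None:
        divisor = PRESETS[preset]
        Path(out_dir).mkdir(parents=True, exist_ok=True)
        for src in Path(src_dir).glob("*.png"):
            img = Image.open(src)
            if divisor > 1:
                img = img.resize((img.width // divisor, img.height // divisor),
                                 Image.LANCZOS)
            img.save(Path(out_dir) / src.name)

    # package_textures("source_art/8k", "build/textures", "high")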
I mean, it can - but that's also a solved problem. Look at Far Cry 6 - you want HD textures that require at least 12GB of VRAM? No problem, it's just an optional download.
It’s actually cheaper to build a game with higher graphics settings and then let people tweak the settings vs trying to optimize a game for hardware you can’t actually test with in development. On top of this you can release some awesome screenshots from settings that aren’t actually playable at release.
This is especially beneficial when deadlines slip and suddenly consumers have a lot better hardware than you were expecting.
I agree! When it comes to graphics, just make them look great with current (and not uncommon) hardware. When most gamers have upgraded hardware capable of substantial differences games can keep selling remasters if people care.
I'm fine with nearly all gamers getting a better experience even if that means the tiny fraction of gamers who can and are willing to spend insane amounts of money on the best of all possible video cards are not able to take full advantage of their crazy hardware in most games.
There is a limited set of knobs a developer can add without increasing their development costs. If you ship a set of "ultra mega extreme" textures that will only be usable with future hardware you are still bloating the download by many gigabytes, probably dozens or even hundreds. If your dev team says they can make even better shadows but not on today's hardware then is it really worth the development effort to create them now? You can multiply particle effects to crazy amounts but that ends up looking silly.
To be clear, I'm just asking for extreme draw distances, higher resolution shadows, and full quality textures. If that would require significant engineering, I stand corrected! I can certainly see how install size would be a concern with textures.
I find it difficult to return to games such as Assassin's Creed II because of their muddy textures and low draw distances. These issues feel like something that could have been avoided with just a tad more forward thinking!
There are also games like Quantum Break which (at least at launch, not sure if it was ever fixed) included mandatory upsampling which the user couldn't disable. The reason given was that the game wasn't designed to be playable at native resolution, and no current hardware would be able to run it that way.
Extreme draw distances:
Large open-world games, or those with very long, content-intensive environments, need to resort to tricks to unload parts of the world that are not visible or quickly accessible. Extreme draw distance can mean keeping orders of magnitude more objects resident, which could mean a lot more materials loaded, more draw calls, more VRAM usage, or more swap.
Higher resolution shadows:
Hard shadows tend to look bad, soft shadows tend to perform badly, and worse with more lights. It takes a lot of deep GPU knowledge to do these in a visually convincing and high-quality manner. The difference between "good enough" and "perfect" will easily cost you double-digit fps at a minimum.
Full quality textures:
As with the draw distance caveat, implementing LODs is rather work-intensive. Some people will tell you that you can automate it, and they're half-right. If you are looking for top-notch game quality, that absolutely does not cut it, but if you're not trying to go the extra mile it can be serviceable.
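For anyone unfamiliar with LODs, here's a minimal sketch of the kind of distance-based selection the automated tools give you - the thresholds are arbitrary, and real engines also weigh screen-space size, hysteresis, and streaming budgets:

    # Rough sketch of distance-based LOD selection; thresholds are placeholders.
    def select_lod(distance_to_camera: float,
                   lod_distances=(25.0, 75.0, 200.0)) -> int:
        """Return 0 for the full-detail mesh, higher numbers for coarser LODs."""
        for lod, threshold in enumerate(lod_distances):
            if distance_to_camera < threshold:
                return lod
        return len(lod_distances)  # beyond the last threshold: lowest detail / impostor

    # A higher "extreme draw distance" preset would simply scale these thresholds
    # up, keeping more objects at high LODs (and resident in memory) at once.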
Games are super inconsistent in how far they push technology vs push the art, but there is rarely a "turn the dial to 11" knob ready to turn. The production requirements and technical limitations mix in unpredictable ways.
Other times, games push ridiculously far in certain directions that later become mainstream, and execute well enough that, after they are copied into other mainstream games, they feel deficient - not in spite of their success, but as a direct result of it!!
> Extreme draw distance can mean keeping orders of magnitude more objects resident, which could mean a lot more materials loaded, more draw calls, more VRAM usage, or more swap.
The point is that all of this merely requires more resources at runtime, not any additional work on behalf of the developer. So, by allowing limits higher than is practical on hardware at the time of release, the game can scale somewhat better on future hardware. What's the downside?
High-res textures are a different thing, since they actually have to be painted. Or upscaled, I suppose, but that's still code somebody has to write.
> High-res textures are a different thing, since they actually have to be painted.
Ah, I want to clarify again—I was imagining the developers already had higher quality original textures, which they had downscaled for release. The textures in Assassin's Creed II, for instance, have the look of images that were saved down to a lower resolution from larger originals. But I could be wrong, or even if I'm not, it might be less common nowadays.
As you say, the goal is to include things that only require computational resources at runtime (even an order of magnitude more).
Look, if you want to pay $1600 to play one game at 4K on max settings, be my guest. I'm fine with it in 1440p or in 4K at not max settings, but you do you.
PC vs console has never been a value debate; if one wishes to merely play games at acceptable-for-most quality, console beats the pants off PC all day every day. PC gaming was historically and still is in some cases about living on the edge of available consumer tech; a Vive Pro 2 is 2448 × 2448 @ 120hz /per eye/, PSVR is 960 x 1,080 90-120hz, but for getting into VR it's like a quarter (or less) of the cost of a top-tier VR headset and the PC that can run it. Is that PC experience 4x more valuable than the PSVR experience? Probably not for the vast majority of people, but they still sell enough Ferraris to make it a viable business when a Toyota Camry will do the same job of transportation.
I wasn’t alluding to PC vs console here — I was literally talking about me, someone with a 3080, who has been playing Cyberpunk on it for 18 months — happily, I might add. I understand that there is a strain of enthusiasts who want to push everything to the extreme, and I understand the appeal to that audience. I’m frequently part of that audience (my desire for a white graphics card outweighed me caring if I was playing CP2077 at above 1440p or not, depending on what frame rates I want) - though I have an XSX and a PS5 as well - but $1600 for one game, right now, isn’t something that compels me. Especially since I’d need to re-evaluate my PSU situation, which might impact my case, which might impact cooling and so forth. Talking to my other enthusiast friends, I know I’m not alone.
This calculation is actually more complicated and doesn't trivially resolve to a single number. Comparing the two devices across the board only works if it's a dedicated computer used exclusively for gaming.
Many people buying a gaming PC will be buying a device that does double duty, and the price differential is best described by the GPU required for work vs play. If we want to truly compare it to console quality, we ought to be comparing against a budget GPU similar in performance to a PS5 - for example, a $370 nvidia RTX 3060 vs a $500 console.
From time to time I am reminded of the corners cut on game consoles. My Xbox One S plays DVDs but it doesn’t play CDs. It doesn’t play HEVC. I really wish I could have just one box next to the TV that satisfies all media needs.
This is a fundamental shift from years past where perf per dollar scaled up every generation. Now they are trying to scale performance while also scaling up price. Only time will tell if the demand is inelastic or not.
Especially considering the whole 30 series had inflated MSRPs due to the ridiculous demand GPU mining created, which led to GPU shortages for regular consumers.
But now they have this bin of chips designated as the low-tier 4080 that they previously said was worth, what, 1000 dollars? Do they call it a 4070 and just instantly sell it for 600 dollars? Really shows that the value behind their cards is whatever Jensen and friends want it to be, actual costs be damned.
Yes, good, you're getting there. Now... put them together...
Inflation is driven by consumers having more money. This means that more can be charged for the same products and the consumers will begrudgingly pay. This means more will be charged.
This is why giving ~50% of US net-taxpayers free cash caused inflation to spike.
Some reddit post looked at the % differences in core count and clock speed relative to each generation. It most closely fit the spot a 4060 Ti would occupy, based on its specs relative to the actual 4080.
They got greedy and every reviewer in existence called them out on it. The price is too high for a 4070, so they called it another 4080 no matter how different it was from the other 4080.
It probably will be. A 4075, 4080-lite, 4070 super or similar. I don't think their pride and target retail price could stomach calling it the 4070 it clearly is.
In 2018 the 2080 launched at $800 and was seen as extremely overpriced, such that the entire generation of cards sold poorly. There's no reason to cherry-pick data points unrelated to the current date just because you can remember chiselling your circuits out of stone tablets in 1997. It just isn't relevant.
4080 12GB was universally panned. The 40 series launch also got heat for price gouging, particularly the higher cost for the low end of the launch (4080 12GB). They had to raise the cost of the lower end of the 40 series though if they wanted to maintain the value of the 30 series cards and clear out the remaining inventory. They couldn't just release a true 4070 for a true 4070 price. While the name was obviously bad, it seems likely that they wanted to obscure the release of a 4070-quality chip for a 4080-price while attempting to sell off remaining 30 series. Pure speculation: maybe they were hoping a "cheaper 4080" would come across to the uninformed as Nvidia trying to lower the entry cost for 40 series rather than raising it through an expensive 4070.
Two potential reasons for the rollback come to mind:
1) higher than expected 4090 demand means they can wait to launch a 4070.
2) higher than expected heat for the thinly veiled 4070 price gouging made it worth it to wait on the release since it helps sell more 30 series cards by raising the entry price for a 40 series while getting better PR in the process.
It's actually even worse. If you look at the core counts, the 4080 12GB is a 60-tier card, and the 4080 16GB is a 70-tier card. The 4090 has a much better power-to-cost ratio.
Has anyone done analysis on this? My layman's assumption is that with the shortages and gouging/scalping over the past two years, an awful lot of people decided to tough it out on their 10-, 16-, and 20- series cards, and now the narrative is that the shortages are over (whether or not the actual prices really back that up) and those people who skipped a generation or two are now emotionally and financially prepared to "treat" themselves to the new top of the line.
If this is it, though, it seems weird that it could really have caught Nvidia by surprise. Don't they have driver-level telemetry that would show them all those older cards plugged into new-chipset motherboards, and could give them some indication of demand?
Plenty of people do have the money to spend on these cards. It's entirely possible that it's really just a vocal minority that refuses to pay these prices. I agree with the grandparent and the 4090 probably sells better than expected. The card performs well too.
We are in an economic recession, so even if people have the money, many are not willing to spend it on a graphics card. If you also consider parts of the world like Europe where the price of electricity more than doubled and the power consumption of 4xxx series (practically secondary room heaters), there are even fewer people here willing to pay the price.
> If you also consider parts of the world like Europe where the price of electricity more than doubled and the power consumption of 4xxx series (practically secondary room heaters)
Considering the worries about heating in the winter this year in some European countries, marketing the 4xxx as a secondary heater might actually be a good idea ...
That's what you think and expect, but it might not be what is happening. The 40xx series is already priced above a point where people that don't have the necessary disposable income can afford a 40xx. I doubt the electricity prices affect hobby and professional users of these cards all that much.
Benchmarks I have seen absolutely put them above existing workstation cards in everything except memory. If your model and embeddings fit into 24GB of VRAM, it absolutely makes sense to buy this over an A5500 or even an A6000.
That’s me. I spent 4 years with the last gen and I don’t feel bad about spending $1600 this time. I actually feel lucky that I kind of skipped over the whole shortage.
Absolutely, and that's pretty serious coin even for us wealthy tech workers! You could buy both next gen consoles for the price of a single component in your computer.
Sure, if the card costs $1600. But for most mere mortals, their GPU is built into the CPU, soldered onto a laptop motherboard, or if truly discrete, is at most a quarter of the total BOM.
I was shocked to learn today that B650 boards are available. That information didn't seem to make it anywhere near my usual technology news channels!
But... they start at $170 for a barebones motherboard. Having spent $200 not too long ago for a well-rounded mid-range X570 board, I find $170 for the starting line up quite steep. And it's unlikely builders want to pair their $300+ Zen 4 chips with the most basic board available.
The barebones right now would be $170 + $300 + $90 (16GB DDR5) = $560 before accounting for the rest of the parts (like a GPU).
Yup, doubling the memory bandwidth, doubling the memory channels, doubling the PCIe bandwidth, and switching to DDR5 is placing a premium on the new AM5 platform for AMD. Something similar happened with the Alder Lake launch, which had the same upgrades, combined with sky-high DDR5 memory prices.
Just wait a few months, pioneers are the ones that get the arrows (high prices) in the back side.
I'm waiting until around March/April... hoping that prices settle by then, also considering rDNA3 and hoping to see an R9 7950X3D model by then before making final decisions on a next build. Also, right now there's not really any good options for higher speed DDR5 at higher quantities and am curious to see which boards support ecc by then.
It seems like it has been bought out by scalpers after a few days of plentiful stocks in the EU. We'll see if they will prosper or if they keep listing them on eBay for a long time...
I always expected the 4090 to sell like hotcakes. 4080 16gb I was more sceptical of but it still seemed like a nice card for 1440p gaming considering the 4090 is just overkill at that resolution.
It was the 4080 12gb I was expecting to flop hard. The 4080 12gb was priced way too close to the 3080, which is now going for jack squat, so there was hardly a market for it.
It's not just that the 12 GB 4080 was a confusing name - it wasn't the same class of card, at all.
In previous generations when they've had differing memory sizes for the same card, that was the ONLY change. So, it was useful for something like CUDA, but usually not for gaming. A specific audience.
For the 4080, the 12GB version has the following changes:
* 12GB VRAM vs. 16GB VRAM (the obvious one from the name)
* 7,680 CUDA cores vs. 9,728 CUDA cores for the 16 GB.
* 192-bit memory bus vs. 256-bit memory bus (understandable, since this scales with memory size... but also probably means the memory itself is slower).
This isn't just a different amount of memory, it is fundamentally a different product and should be marketed as such. Instead it's Nvidia being greedy.
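Quick arithmetic on the core counts above, which is where the "~20% fewer cores" figure earlier in the thread comes from:

    # Gap between the two announced "4080" configurations, using the listed core counts.
    cores_16gb = 9728
    cores_12gb = 7680

    print(f"12GB has {1 - cores_12gb / cores_16gb:.0%} fewer cores")  # ~21%
    print(f"16GB has {cores_16gb / cores_12gb - 1:.0%} more cores")   # ~27%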
That's not quite true. They've pulled this stunt a few times. Most recently with the 6GB and 3GB versions of the 1060. That was arguably even worse because they did it after the launch.
The 1060 didn't really have significant differences in performance, as far as I can tell. Still bad to do it, but GPU benchmarks appear to put performance hit at ~3% less for the 3GB version.
Looks like there's also a 5GB version which has slower clocks... which is about 10% worse... but I assume that's mainly due to clock rate, not the memory or the actual silicon.
Those all have the same number of CUDA cores and the same processing pipeline, though. Unlike this '4080 12GB'.
Not true at all - it seemed like a great deal at the time but aged very poorly. You could run a 980 today and still be doing great. Everyone I know with a 970 is struggling
At the time you would have been much better off going for a 780, as there were some bargains - especially on the rarer 780 variants with lots of RAM.
I was running a heavily overclocked 970 until about 7 months ago, when I upgraded to an ultrawide monitor. Before that, it was more than happy playing the games I played at 2560x1440 at only a few fps slower than my partner’s 980. For the price it cost me, it was an excellent card, and I paid more than triple, adjusted for inflation, for the 3070 that replaced it.
I still have a 970 and it's doing fine. 60 FPS 4K Rocket League. I dunno what games and settings you guys are using but I'm sure it could handle something more demanding if I just turn the settings down.
I seriously enjoyed mine and didn't understand what the fuss was about. Sure - it might have had less memory than people expected - but in the end the game performance for the price point is what mattered to me. And that was great. I got a card which delivered great performance at that point in time (certainly similar to what a 4080 12GB is for today's generation of games) for less than 350€.
The GTX970 is a fun bit of hardware. It's marketed as a 4GB RAM card.
In practice, it's 3.5GB of normal GDDR5, and 512MB of horribly slow, would-have-been-considered-bad-in-2000 RAM. So, people who bought it thinking they're getting a less powerful GTX980 got a more inferior product than they bought. The last 512MB is truly worthless, only good for storing a desktop framebuffer. Anything that needs to write into it quickly (like, say, literally any game) will just slow down to a crawl.
I owned the 970 for 7+ years and I remember all about that brouhaha.
I also remember that the card was incredibly reliable, never showed those promised performance dropoffs from the "useless last 512MB" of VRAM, and still performs well even for some 1440p workloads to this day.
It's been one of the best hardware purchases I've made over the years, which is why I'm also very sceptical of this screaming about the 12GB 4080 model. None of us who bought the GTX970 got a "more inferior product" than what we thought - we got exactly what we'd seen in benchmarks.
And I bet it would be the same with buyers of those 4080s - they'd get a cheaper, slightly slower model and it would perform just like the reviews and benchmarks promised. The whole drama is just about the... number on the box?
I still own a GTX970; one of the fans has failed (but then that's just aging), and the last 512MB is awful. Main memory operates at 192GB/s, the last block at 28GB/s. It is easily felt, and most GTX970 owners explicitly tried not to use that memory, especially in games that swap assets often (open-world games are deadly).
The last-512MB issue was mentioned nowhere in benchmarks or marketing material, until people realised something was wrong and nvidia said "teehee woopsies".
I'd love to hear any kind of citation about this "avoiding of last 512MB", because after the initial screaming, the issue kinda disappeared and pretty much every open-world game I threw at it ran just fine on High @ 1080p (or even 1440p in many cases).
Sure, the numbers in benchmarks do drop off, but what was the actual effect when you ran an actual game on it? I never had to "avoid" anything and haven't really seen any notes about doing that outside the benchmark readers.
What a strange post - the name is not right, so it's unlaunched. No indication of a new name? Am I supposed to conclude that the whole product is canceled for now? If that's the case it seems unlikely that the naming error is the whole reason.
AIBs will have already made tons of cards, boxed up and ready to ship out, just contractually unable to sell them en masse quite yet. And now they really can't sell them until they've reprogrammed and reboxed them to show up as (almost certainly) 4070 instead of 4080, because that's essentially what they are. The 4070 wasn't missing from the lineup: the GPU chip on the 4080 12gb model is literally a completely different chip from the one on the 4080 16gb.
(Think of it like Intel calling the 14th gen i5 "an i9". Or heck, Exor deciding to label a Fiat sports car "a Ferrari 812 V4" while keeping the real thing "a Ferrari 812 GTS").
And of course, that's a gigantic dick move because it costs NVidia nothing to announce this, but probably means no one will even be able to make a profit on the 4080 12GB cards they already made (which a cynic might say was precisely NVIDIA's goal here, as plausibly deniable punishment dished out to the remaining AIBs for daring to let one of their own disobey the corporate overlord).
If EVGA hadn't already broken their partnership, this definitely would have made them do so. "Thank you for jumping through our hoops, after paying for the entire course yourself, now make a new course because you displeased your master" is not a great way to treat your partners.
Curious about the downvote. The OEM market is a very different beast, in that it doesn't suffer from the "reviewer peer pressure". If a million youtube reviewers go "this 4080 is a 4070, wtf??", including people like "Steve from Gamer's Nexus", Jason Langevin, and Linus Sebastian, you change your tune (even if you don't admit that's why) because consumers pay attention to these people. But in the OEM market, that pressure literally doesn't exist. Intel can call things whatever they like on the OEM side, but unless that affects the enthusiast OEM market (which is, let's be real: extremely small), no one is ever going to call it out, let alone care.
If true, it reminds me of the reputation that Jack Tramiel, founder of Commodore computers, had in the 80s of screwing over his suppliers to the point they would no longer do business with him.
Do we know that for sure? It's possible that the fake 4080 was for yield reasons but the yields are higher than anticipated. May as well sell the same chips for more money and "unlaunch" the shitty product.
Yes, back in August warehouses already had stocks of RTX 40xx cards.
This "reverse" has nothing to do with yields, nVidia (rightfully) realized it's better to avoid a shitstorm because this 4080 12GB version is ~30% slower than the "real" 4080.
It's hard to trace logic through that. Demand for the $1600 RTX 4090 is from "money be damned, give me frames" consumers, mixed with "can use for ML" professionals, and as such, are not quite the same market segment as the $900 crowd (remembering that the xx80 cards used to be $700 and much closer to the top of the lineup in overall performance.)
It's a different die than the 4090 (and the 4080 16gb for that matter), they already had a lot of them made, but may redirect future production to more 4090 and 4080 12gb while having the mfgs rebadge the 4080 12gb to say 4070 and just sit on them until reboxing/rebadging.
I think EVGA was probably unique in that the owner was passionate about actually providing a damn good product and service and at some point just got tired of it - hard to blame them. Running a business is stressful and if you have to become mediocre after striving to be the best it must just seem pointless.
I think you are speculating from a sentence that I've seen that "AIBs won't know MSRP until announcement" into "AIBs don't know any details until public release."
AIBs need to design assets, decide what components they are going to use, design cooling, test that, sign contracts to make said cards, test the cards that have been made, and then ship them to customers.
You truly believe that every AIB knew "near nothing" about what they were building until public release? And then miraculously do all of that in a compressed time window? Source?
I doubt any partners had designs finalized, given the rumors of how little time they are given to do that before each launch (which is a problem in its own right, and one of the things they are known to complain about).
Never forget that the inflation-adjusted launch price of the GTX 1080 is $740 and don't you dare let somebody tell you $900 or $1200 is "just inflation adjustment".
At the time, this was widely derided as a "fake MSRP" because there were no cards available at the official MSRP, and people said the $700 FE price was the real MSRP.
(even Zotac and EVGA tossed another $20-40 on the MSRP and those specific models were unobtanium for almost a year.)
$725-750 was considered a good deal for an aftermarket 1080 at launch, if you could even find one. Sometimes they went up to $800 or higher from scalping/demand.
If we take that $700 figure in June 2016... the launch price of a GTX 1080 is equivalent to $862 today. And those prices go even higher ($985) with launch scalping.
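For anyone who wants to check the arithmetic behind those figures, a quick sketch using approximate CPI-U values (rough numbers, not an official calculation):

    # Inflation adjustment of the 1080 launch prices quoted above (approx. CPI-U).
    cpi_jun_2016 = 241.0   # approximate
    cpi_sep_2022 = 296.8   # approximate
    print(700 * cpi_sep_2022 / cpi_jun_2016)   # ~862, the $700 FE price today
    print(800 * cpi_sep_2022 / cpi_jun_2016)   # ~985, the scalped-price case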
As a meta-commentary... it's funny how people shift back and forth between the numbers that support their case, like at the time people were super mad about this whole fake-MSRP business, and now Pascal MSRPs are cited unironically as comparisons for the current stuff... but you couldn't actually go buy anything for MSRP for Pascal for at least six months.
I knew someone on another forum who was so upset about the whole thing that they called it the "holodomor launch"... cause being forced to use a GTX 970 instead of a GTX 1080 is just like when millions of people starve to death. People have buried the memory but the enthusiast segment was not happy about the Pascal launch either, at all, but today it's held up as a model launch with model pricing.
At one point there were major shortages and difficulty finding GPUs. It would have been kind of insane in that environment not to raise prices and instead hand the fruits of their work to scalpers/middlemen.
If people don’t want to pay inflated prices they should just not, and ultimately the problem should solve itself.
I suspect they're trying to maximize profits before the 25% GPU tariff comes back. Then they can get good PR for keeping their GPUs at the same overly inflated price
Not the first time that Nvidia has released completely different GPUs under the same name. Around 2006, they had 2 existing variants of the 8800GTS (320MB and 640MB VRAM). A year later, they launched a new 8800GTS with a newer GPU and 512MB. The newer card was much faster than both older versions of the card. I can only imagine that this caused lots of confusion for uninformed consumers who might think 640 > 512 > 320.
This, Nvidia competing against their partners with Nvidia-branded boards, and the EVGA article a few weeks ago make it seem like being in partnership with them must be dreadful.
These card makers now have to sit on inventory and reprint boxes and repackage everything?
Apple dropped Nvidia after Nvidia chips were failing en-masse in the late 2000s and early 2010s, and Nvidia tried to publicly pin the blame on Apple, even though Nvidia chips in laptops by other manufacturers were also failing.
Semiaccurate isn't a reliable/unbiased source at all and shouldn't be cited as a serious source. They're on the level of the UserBenchmarks guy as being sometimes right, but always incredibly hyperbolic and emotionally attached, to the extent that it affects their work.
RoHS solder failing was an endemic problem from that era and affected all products from all brands... for example, there used to be lots of reddit posts about people baking their Radeon 7850 GPUs too.
Apple is just an 800 pound gorilla in every relationship they enter, and they like it that way, and they can use it to try and recover unexpected costs. A lawsuit of opportunity - there would have been no point in anyone suing the near-bankrupt AMD at that point, but NVIDIA had money and it was worth trying to pump them for cash even if it was an industry-wide problem... NVIDIA is part of the industry, right?
The 4080 12gb release date hasn't been announced yet so it's likely that no boxes were printed or packed.
In the long run this is a good move considering how idiotic it is to give the same name to products with different dies and a ~30% performance difference.
You're talking about supply lines across an ocean... that's a lead time of several months... not to mention the time it takes for making injection molds, etc. These cards were already made and badged... They may just be sitting on pallets waiting to be boxed, or may already be in boxes and/or shipped.
At the very least there's probably some recall operations to ship back, take off the shrouds and put on new shrouds for the rebadge, if not also rebox.
Sure, but has anyone started mass production of 4080-12GB cards? Maybe there were no plans to launch it this year. I bet the PCBs have not been attached to the coolers and shrouds yet. If so, they can use the "4080" badged shrouds for 4080-16GB cards and make new shrouds for a late release of the 12GB version.
I'm pretty sure with stock selling in stores in a month, that there's already cards fully assembled, boxed and shipped to US warehouses, or at least on ships headed to the US.
edit: Also, have you ever taken a video card apart and re-assembled it? Even if the components/shroud are reusable for other 4080s (and they likely are), this is a lot of labor cost above and beyond any practical waste.
More than that is the injection molded shrouds and backplates on many of the cards in question, not to mention reboxing and recalling existing/shipped inventory.
Most of them just say "GeForce" or "NVIDIA", most brands didn't mold numbers into the shrouds/backplates (because until the launch announcement, they have no idea what the SKU placement for any particular chip will be in the first place).
It's a mess but no need to make it out to be worse than it is, GN already confirmed that NVIDIA is compensating partners etc.
History... the GTX 1070 was a pinch better than the previous generation GTX 980 Ti; cost ~$429 (MSRP $379). The GTX 1080 was maybe 20-25% faster still, ~$699 (MSRP $599).
The RTX 2080 was launched as the top card (September 2018), with the Ti and Super coming later.
It wasn't until last generation (September 2020) that Nvidia introduced the x090 naming, and with an eye-watering $1499 price. The initial 10GB RTX 3080 had an MSRP of $699.
The 3090 heralded the return of “extreme” SKUs in the line-up, the ones introduced with the 590 and carried through the 690. There was no 790, but I would consider the Titan/Titan Z SKUs to have picked up that segment.
You make the assumption this unlaunch was Nvidia's idea rather than the AIBs approaching Nvidia and informing them the mislabeled GPUs won't sell well.
The "4080 12GB" name was misleading almost to the point of fraud because it led customers to think it would have the same performance as a 4080. I don't object to the product itself, just the name. Removing it from the market is the right move. Maybe they'll relaunch it with a non-misleading name.
In the past, NVIDIA launched the "3080" and the "3080 (12GB)" where the memory capacity and bus width was the major differentiator. (it also had like 3% more cores or something).
The RTX "4080 12GB" and "4080 16GB" were much further apart in cores (IIRC a 30% difference) and so naming them in the same category in a way that suggests the RAM is the primary difference, was widely seen as disingenuous.
Yes, technically any consumer could look up the specs, but that still doesn't make it any less of a dirty and dishonest move.
The 4080 12GB did its task. It sent gamers a message: do not wait for cheaper 40-series cards, go buy 30-series now. Now the best thing Nvidia can do is wait until Navi 3 has launched and release the 4070 Ti with the same specs, but decide the price point based on what AMD has set for their cards.
> It sent gamers a message: do not wait for cheaper 40-series cards, go buy 30-series now
Jensen mentioned this with the Q2 earning statements [1][2], that pricing would be set to sell old (30 series) inventory:
> And so our first strategy is to reduce sell-in in the next couple of quarters to correct channel inventory. We've also instituted programs to price-position our current products to prepare for next-generation products.
> ... we've implemented programs with our partners to price-position the products in the channel in preparation for our next generation.
Indeed, apparently Nvidia misjudged the GPU forecast and the crash of crypto mining on GPUs and ended up with a ton of stock. Apparently AMD was more accurate, so here's hoping AMD beats Nvidia to the punch at the $300, $400, and $600 price points.
Weird post seemingly written by an intern during lunch. The pictures of "lines" scream desperate, "see! people want our cards!". I think nvidia is in deep trouble.
That's what stuck out to me as well. This legitimately reads like it's by someone who's never written marketing copy. And what the heck is with the last picture of the box buckled into a desk chair?
> And what the heck is with the last picture of the box buckled into a desk chair?
The blog post reads like an Intern who is/was a Redditor wrote it. The "GPU buckled into a seatbelt" is an old-but-common PC Builder Meme/Tradition online (particularly Reddit).
Oh, but what an amazing desk chair it could be! Heated seat! Motorized recline, height, position adjustments! Seatbelts for those intense coding sessions (or, more realistically, desk chair races). Hell, managers will love them too, as they already have the butt-in-seat sensors to know if their underlings are "working".
A funny point is that the line depicted in the second picture down (Micro Center - Burlington) is not exactly unique to an NVIDIA launch. Micro Center often has long lines for a variety of manufacturers' new parts. That place is basically the biggest outlet in the area for the DIY crowd.
I'm sure there's some justification for why it's a 50% increase in price, but if it's a necessary increase then even releasing it just seems tone-deaf given the state of the world right now.
2080 is when they introduced RT and the card performed in raster gfx about the same as the 1080. And support for RT was coming in the future so it kind of makes sense it was priced the same.
Agreed. This post is really weird, especially coming from a multi-billion company. Perhaps I'm too used to corpo-speak, but this coming from the opposite end does feel kinda fishy.
From the company that brought us the 970 4GB, which had 3.5GB.
Thing is, they know there are plenty of suckers out there and will absolutely not call it, let alone price it as, the 4060 it clearly is.
Help us, Lisa, you're our only hope. Not that AMD didn't overprice the AM5 platform, too. The only way is to resist. Just wait it out if you can help it
>Not that AMD didn't overprice the AM5 platform, too.
The B chipset is cheap, and AMD mentioned the possibility of $100 boards.
Some boards in the market are actually <$150, yet most motherboard vendors went with outrageous price labels (!), even above $300. I doubt they're "twice as good" as the $150 boards. Prices should become more reasonable as time passes.
I've never been one to buy expensive motherboards. I don't need a million billion phases, SATA ports and USB devices. The most expensive is my current X570 board for one of my 3700X builds. Total overkill. Not complaining, just saying.
Hope you're right, though. I would be interested in a 150 euro mini-ITX board hosting one of the upcoming desktop APUs.
I was expecting the article to announce it was being renamed to a 4070 or something. Now I'm just confused. It's just going away? Did they already manufacture these?
Nvidia got caught with their pants down, expecting consumers to be stupid and not know that their 4080
12-gig card was just a 4070 Ti wearing makeup. They 100% knew it; it definitely was NOT a mistake. They just tried to sell a lower-spec card masquerading as a better card to average customers who would not know any better.
Tons of reviews are saying $900 for the 4080 12GB is insane. The 4080 12GB is, after all, a 3060 update with a 192-bit wide memory interface. Even the 3060 Ti has a 256-bit wide memory interface.
Definitely had me decide to wait to see how AMD does in Nov.
Not going to agree on that one; the message was generally "Nvidia is screwing gamers, buy ANYTHING else."
Much like the general news of AMD server chips, tons of press on Intel's shrinking Xeon marketshare. Sure people are talking about Intel, as a lesson on what not to do.
This is sort of true in this case. People who don't follow GPUs closely, like me, now know that the 4090 is an appealing card and the remaining 4080 deserves the name.
Same -- I can appreciate there may be confusion, but this was done by nVidia before. There is precedent -- though I guess with marketing the customer is always ultimately right.
Still can't get over the power draw for the 40 series. They recommend an 850W PSU for the 4090? A 750W PSU for the 4080? Who is the intended audience for these cards?
If you reduce the power target to 60% you still get 90% of the performance[0]. There's no reason for them to push the card this hard unless they were scared of losing to AMD.
I was thinking to myself the other week "It's only a few dollars more for the 16GB, why does the 12GB even exist?".
Was under the impression that perhaps they had changed direction late in the game and had to offload those other cards.. Or maybe they lacked the political will to "unlaunch" it earlier.
What is going on over there? First the confusing move to release two 4080s, now a confusing press release about unreleasing one of them, that itself reads like somebody released it prematurely?
No way they did this on their own. Retailers or partners must have pushed back on it because they didn't want to deal with upset customers and constant returns, or scams.
That 12GB 4080 had compute a bit above the 3090 Ti,
but memory bandwidth between the 3070 and 3070 Ti.
If the cache hit rate is similar to a Navi 2 chip of similar size, it would have effective bandwidth similar to the 3090 at 1440p and way above any previous-gen card at 1080p, but at 4k it would be around 3070 Ti territory in terms of memory bandwidth.
So a great 1440p card natively and to those who are willing to use DLSS to upsample from that to higher resolutions.
What had me worried is that the 4080 12GB has a 192 bit wide memory interface, same width as the RTX 3060 and less than the RTX 3060 Ti.
Sure, they worked on the caches to improve performance, but I always worry that some games will do poorly with the caches and have terrible performance. After all, it's the lowest frame rates that are most noticeable, not the max or average.
Charging $900 for a RTX 3060 memory width is insane.
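Rough bus-width math for context - the per-pin data rates below are assumed (GDDR6 for the 30-series cards, GDDR6X for the 4080 12GB) and may be slightly off:

    # Raw memory bandwidth = (bus width in bits / 8) * data rate in Gbps, in GB/s.
    # Data rates are assumptions, not figures from the comments above.
    cards = {
        "RTX 3060    (192-bit)": (192, 15.0),
        "RTX 3060 Ti (256-bit)": (256, 14.0),
        "4080 12GB   (192-bit)": (192, 21.0),
    }
    for name, (bus_bits, gbps) in cards.items():
        print(f"{name}: {bus_bits / 8 * gbps:.0f} GB/s")
    # The narrow bus is partly offset by faster GDDR6X and a much larger L2 cache,
    # which is exactly the cache-hit-rate worry raised above.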
I will get downvoted to oblivion, but if you consider yourself an x080 customer and don't want less silicon in a 4080 12GB, or the price hike for the 16GB, you have a very viable option of just keeping your current x080 for free.
The 4090 is extremely good value for an extremely small group of people wanting ML or perhaps 8k gaming. Cards that will appeal to more than a very select few will be released later.
Hmm it looks like my idea of buying an AMD G series CPU with integrated graphics so I can wait out the current video card market was a stroke of genius.
Based on the pricing, they unlaunched the wrong one. The price of the 16GB 4080 is way higher than the 3080. Based on pricing alone, it's the 12GB card that should be called the 4080 and the 16GB card called a 4080Ti or Super or 4085 or something.
So hopefully this is a sign that they're going to adjust the pricing of the 16GB 4080 to the price announced for the 12GB former-4080.
What I bet will happen is: the 4080 16GB is going to be a very fast card that's going to sell like hotcakes at its set price point. Just like the 4090 is selling very well despite all the moaning about power and price.
In this case, I'm not sure that hope is really even worth it. This whole launch has been blatantly artificially inflated to move more of the backstock on the 30 series.
I'm leaning that way myself... will see where it lands... if it can get RTX performance close to 3090 or so will probably go that direction... I'm not giving up 4 slots for a video card, even if the boards just have m.2 there, it just feels so wrong.
As a Linux gamer this isn't even a question for me since AMD is so much better on Linux. But I'm sure they'll beat Nvidia in raw performance too. They already did that with 6000 vs 3000 series.
What the hell is going on at nvidia? First the very transparent not-a-4070 scam, then gigantic misjudgement on crypto demand drop, then the evga mess, then this "unlaunch" that looks like an intern post.
The tech still seems good (e.g. DLSS) but corporate decision making seems in freefall
Although there have been a lot of negative reviews regarding the value of the 12GB 4080, I wonder if it could be more of a response to what AMD is planning to release shortly? I don't recall Nvidia backtracking in the past because of bad press.
This is the era of the $1000 phone and the $100 laptop. Running a decent metaverse needs something comparable to an NVidia 1060 6GB from 2016. That cost about $250 back then, and it still costs about $250 used. NVidia has nothing in their current product line at that price point.[1] This is a big problem. There ought to be something with NVidia 1060 graphics power and memory around $100. But no. NVidia got spoiled selling to the Ethereum mining business at a huge markup.
The Metaverse may have to wait until the new generation of Chinese GPUs ships.
Those pics of people waiting outside... yes please Nvidia, more abuse. EVGA seems to be a pretty decent company, and they quit Nvidia. I'm thinking they're the canary in the coal mine.
I read this release 3 times and still don't know what's happening. What does "pressing the unlaunch button" mean? Discontinuation? Rebranding to RTX 4080? What?
There will not be a 4080 12GB card with the specs it was announced with. Basically, "pressing the unlaunch button" is exactly the opposite of doing the announcement. An attempt at "we take back what we said, imagine that nothing happened".
These big companies really need to get naming input from someone other than marketing teams. The second the 4080 and 4080 (not a typo) got announced, Nvidia was shredded by the media. It was immediately and obviously clear to basically everyone that this was a bad naming system and only a bunch of navel gazers could have thought it was "good".
I get that Engineers tend to be more practical in their names, and don't have the finesse that marketing is looking for. But at least some sanity checks would be good....
There was a golden age in the '00s when it was possible to get the gist of what Nvidia and ATI card names meant without consulting a very dense table. It was nice.
Are you maybe thinking of CPUs back when they were marketed by clock speed? Because GPU naming has always been a mess. In the mid 2000s for example you had the Nvidia Geforce 7 series with product names such as: 7800 GS, 7800 GT, 7800 GTX, 7900 GS, 7900 GT, 7900 GTX, 7900 GTO, 7900 GX2. They've been moderately consistent with "bigger numbers in the name = higher end card" but beyond that you can't tell anything meaningful without comparing the cards in a table.
It was amazing, actually. Intel's marketing was so spectacular. Blue Man Group. Bunny suit commercials. Pentium, what a name. Intel Inside - those two words start an uninvited jingle in my head.
This is not looking at it through rose-tinted glasses and nostalgia. It was objectively better: fun, straightforward and iconic. Not a single person knows what Intel's (or AMD's, nVidia's, Apple's, etc.) advertisements after the 2000s even were. Do you remember the last Apple ad? No. It is all generic, designer bullshit.
All of it has gone to toilet. Marketing people have lost it across the board.
I agree. I think a deeper problem is it takes a Ph.D in Intel / AMD branding to understand what to buy. An 80486 was faster than an 80386, and 33MHz was slower than 66MHz. It was simple.
Intel's i7 line-up goes from 2 to 16 cores, 1-4GHz, spanning 13 generations. Toss in i3/i5/i7/i9, and lines like Atom and Xeon.
Each time I need to upgrade my computer, I groan. It's not just less fun, it's positively miserable.
Most people I know either buy the cheapest possible computer, or an Apple. I don't know why Intel thinks anyone will spend extra if they have no idea what they're buying. Most non-Apple users I know have phones with faster processors, higher-resolution displays, and higher prices than their laptops.
Agree on the misery. I was speccing out a build and inadvertently picked a 2019 processor because the naming made that extremely unclear.
(I'm now actually looking at an AMD 7700 rig, because Intel won't do ECC on "desktop" CPUs, except with a rare chipset for which I can't find a mobo for sale at the moment...)
The 13 generations are particularly bad if you're trying to advise someone looking at a used system, when half the time the listing just says "Core i7", which is meaningless without at least the model generation.
It's the Packard-Bell marketing strategy. Confuse the marketplace with a profusion of similar models so that comparison shopping can't be easily applied by casual buyers.
That strategy works well in a lot of places, but it's not what's going on here. This is just plain old incompetence and mismanagement. It's a mess, rather than a strategy.
For Intel, it just results in most buyers buying the cheapest possible system since there is no way to tell what's what. Intel would make a lot more selling $400 CPUs than $50 CPUs, but to do that, people would need to see the value.
Oh please, AMD CPUs had lower clocks, so to compete with Intel's (making up numbers to illustrate the point) 2.3GHz chip when theirs ran at 2.1GHz, they would call it an Athlon 2300 or something to that effect.
They may have had a point that their 2.1GHz was as good as Intel's 2.3GHz chip, but it hasn't been straightforward, probably, since the 286.
(Edit, I meant to reply to the parent comment)
To be honest, without looking it up, all I remember is stoner Steve ("Dude, you're getting a Dell") vs. the Jeff Goldblum Mac ad that showcased PC cabling vs. the simple Mac. I still smile when I think of stoner Steve.
Yes, and dear God am I sick of it. AFAICT, they've bought all the advertising space on the web, mobile, and TV for me at the moment. (It's the one with the iPhone auto-dialing 911 in a wreck.)
Yeah, even if they had a good reason not to call one the 4070, the whole thing could still have been avoided by just calling them the 4085 and 4080. And the marketing people could probably have come up with something even cooler sounding, if somebody would have just stopped them from going with 4080 12GB and 4080 16GB.
The funny thing is Nvidia already has 2 sub-part-numbers for better-than-the-xxx0-cards, without creating another line of xxx5 products. The 16GB could have been branded 4080 Ti or 4080 Super with the 12GB being the 'base' 4080.
That was my thought... they should have just called it a "Super", still leaving room for a Ti model later. Or bring back the GS designation: 4080 and 4080 GS. They had lots of options to add distinction.
But that's not the point. It's not meant to be intelligible. The point is marketing, aka to misinform consumers. It's working as expected and it happens in every field.
Choosing obscure names that make it extremely hard to compare characteristics within products by a company, much less to compare to outside competitors, is not a bug --- it's a feature.
Try buying a bike and figuring out how to compare it to other bikes by the same manufacturer from this year or last, or try to figure out what features it carries. You're left doing what you always do: staring at 7 tabs of spec sheets and slowly trying to absorb the features of the various "poorly" named offerings.
It's anti consumer and I'm surprised there's not more outrage, given that a market purportedly should consist of rational consumers making informed decisions.
Why do they call it that, then? I never really looked into it that much and just took it as a measure of a certain type of compute capability (FP16 or FP64, right?)
The SIMT architecture makes it look to the programmer like each FPU is a separate core, but all the cores in an SM have to run in lockstep to get good performance.
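To make that concrete, here's a tiny CUDA sketch (my own toy example, not anything from Nvidia's docs): odd and even threads in the same 32-thread warp take different branches, so the hardware has to run both branches back to back with the inactive lanes masked off. That's why the "cores" aren't independent the way CPU cores are, even though the programming model makes them look that way.

    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void divergent(const int *in, int *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        // Odd and even lanes of the same warp diverge here; the warp
        // executes both paths one after the other with lanes masked off,
        // so divergent code loses throughput.
        if (i % 2 == 0) {
            out[i] = in[i] * 2;
        } else {
            out[i] = in[i] + 1;
        }
    }

    int main() {
        const int n = 1 << 20;
        int *in = nullptr, *out = nullptr;
        cudaMallocManaged(&in, n * sizeof(int));
        cudaMallocManaged(&out, n * sizeof(int));
        for (int i = 0; i < n; ++i) in[i] = i;
        divergent<<<(n + 255) / 256, 256>>>(in, out, n);
        cudaDeviceSynchronize();
        printf("out[0]=%d out[1]=%d\n", out[0], out[1]);  // prints 0 and 2
        cudaFree(in);
        cudaFree(out);
        return 0;
    }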
I'd be shocked if the original names were engineering decisions. It seems blatantly obvious that marketing just re-badged the 4070 at the last minute and it backfired.
Clearly some departments of Sony have engineers naming things. No marketing team would put out a product named the "Sony WH-1000XM4", not to be confused with the "Sony WF-1000XM4".
Overall, Nvidia generally has a very good naming system. The names are easy to understand if you look at them for more than a minute. Take the 4090: 40 = generation, 90 = model, and a higher model number is better. They've stuck with the general concept for the better part of 20 years.
Intel's naming is decent. Their cutesy names like Sandy Bridge, meh. No one can ever remember those. But the Core numbering system is solid. i3 is lowest. i9 is highest. The processor numbers after that can be a little hard and do require a bit of a decoder matrix to understand. But as long as it's a system, with rules, that they follow and that can be explained fairly easily, I'm OK with it. Heck, they have a page that gives you the magic decoder ring: https://www.intel.com/content/www/us/en/processors/processor...
Except Nvidia uses the same branding for its mobile and desktop chips, and in the past they have rebadged older architectures under multiple generation numbers (the GT 510/520/605/610/710 are all the GF119 chip).
Sony’s naming problem is not because of engineers; it’s clearly the marketing team, and the goal is most certainly to make the products incomparable: across continents, across years, or between the unit that was given to reviewers/journalists/comparison sites and the ones that customers can actually purchase.
Sony’s problem is that they try to sell bad products for the price of expensive ones, and the best way to do that is to have incomprehensible names.
I think the goal is more so that big chains can sell model numbers that nobody else sells, making it risk-free for them to promise “we’ll match any cheaper price”.
For what it's worth, I bought a high end TV recently, the Sony Bravia A90J. I've left out some of the full product name, but this info is all you need if you care to look up that TV.
When I was looking in physical stores at physical devices and then researched the model numbers online, I noticed there were important differences between the [A-Z][8-9]0[A-Z] models. 80 vs 90 indicated a jump in overall quality, and the other letters in the model name usually meant the product was created specifically for a particular store (like Best Buy vs Costco vs buying direct) and would have other minor differences from the 'true' version.
A regular person would have probably just looked at the TVs in-store and decided based on whatever looked best, but I happened to have some specific features I wanted, and the weird-ass model names helped.
TV naming is especially crazy. They have variants for everything from geographical location to specific sales events.
My TV lacks the ability to transmit audio via Bluetooth (no, I can't enable it, I think it actually lacks the module). Nobody could have told me that before I bought it, the marketing material and manuals all claim that it has it. There is precisely NO documentation for my specific model.
I'm starting to think that they're actively counting on people not completely testing their devices after getting them.
I bought a TV from Fry's. There's no mention in the English manual, but according to the internet, this model has a DVR built-in, but it only activates if you tell it you're in Brazil when you first set it up.
The A90J is the top model, right? I was looking at those myself recently. Amazon Warehouse occasionally has a cheap deal on one, but I am always scared those probably have dead pixels.
I really wanted a Panasonic plasma, but it looks like the sole importer may not be getting them anymore or might be getting fewer. From what I understand, though, the A90J and the top-end Panasonics are the best in that they have a much better heatsink.
A90J is, by the research I did and the word of the person who sold it to me (a family friend who has owned a TV business for 25 years and gave me his at-cost price), the best. I absolutely love it. And yes, the panel + heatsink are top notch. Some other models/brands use the same panel, but lack the stronger heatsink and aren't able to utilize it as well.
It runs Android TV, which may or may not be a dealbreaker for you, but I enjoy it enough. I just wanted to be free of a vendor-specific TV OS, in order to give myself more flexibility when I try to set up a Pi-hole in the future. There's also a hardware switch to disable the TV's microphone.
Also, the sound comes out from the panel itself, and is (to me) great. It calibrates itself using the microphone within the remote, by having you hold it a certain way when performing setup.
Finally, there's an incredibly posh and satisfying 'click' noise when you turn it off. I don't know why, but this makes me like the TV more.
I think you need to at least also consider the generation along with the bucket for Intel CPUs. For most users a 12th-gen i3 is better than a 9th-gen anything, yet plenty of retailers kept old laptop SKUs around long enough that you would see both side by side on the shelf.
No, I cannot agree that Nvidia's naming system is any good.
First, one could think that a larger number means more performance, but this is not always the case. Second, one might think that a "Ti" or "Super" GPU is better than the regular one in every respect, but this is not always the case either.
The same goes for Intel. The best naming scheme is one where the numbers reflect performance, core count, or cache size, so that products can easily be compared by a consumer.
When the Xbox One was announced, people complained that it was confusing, but really it had been long enough since the original Xbox that the name was just silly, not confusing.
The One/Series S/X crap is genuinely baffling, totally incomprehensible unless you've really been keeping up with every Xbox release. You can go on Wikipedia and figure it out in a few minutes, but...you should not have to do that.
In Sony's defense, everything else with the PlayStation was actually pretty straightforward. PS1, PS2, PS3, PS4, and PS5.
"PSOne" was a weird way to brand a slim console, but it's still obvious that it's a PS1. And while Sony did originally use PSX to refer to the PS1, that was an internal codename, i.e. "different from the Nintendo PlayStation[0]". The gaming press ran with it because people in that era insisted on awkward three-letter acronyms for all games consoles. Reusing it for a weird PS2 DVR combo unit is still way better than Microsoft launching two different consoles with the same name.
[0] The cancelled SNES variant with the also-cancelled Super CD add-on built-in, both built by Sony.
It's remarkable how thoroughly they managed to outdo the confusing nature of "One". Who would look at "Xbox Series" and think that's the name of a specific generation? It's an artistic masterpiece.
Context is required for basically all product names, unless they've managed to make themselves generic, e.g. https://www.businessinsider.com/google-taser-xerox-brand-nam... . Even then, if they are "generic" they still often require the context of a specific country or language.
If I ask you about a Mustang, what do you think about first? Are you into cars and it's a Ford Mustang? Are you into Horses? Are you into Planes? Or maybe you're into ships? Heck, there is an entire list of options: https://en.wikipedia.org/wiki/Mustang_(disambiguation)
A good name is memorable, not necessarily descriptive. Most product and company names today are made up anyways. Or they are named after something else in a completely arbitrary fashion.
The problem comes when a company establishes a name for one thing, then uses it for another. The iPhone is a good name in concept, Pro/Max/Ultra/Mini notwithstanding. But what if tomorrow Apple said there was an iPhone Super Ultra Max that was 10" and couldn't make calls? People would argue that it was an iPad and that this new Super Ultra Max was a stupid name.
It is weird because Nvidia clearly has an instinct to give their cards car names (with the GTX, GT, RTX, etc etc stuff). They should just get rid of the numbers for the most part.
4090 -> 2022 Nvidia Optium
4080 -> 2022 Nvidia Melium
4070 -> 2022 Nvidia Bonum
4060 -> 2022 Nvidia Benem
(I brand-name-ified the Latin words for best/better/good/okay.)
The problem with the numbers is that we expect them to have some meaning. There's no inherent ordering between maxima/altima/sentra but if you are shopping for Nissan cars you figure it out. If you are spending a couple thousand dollars on something you shouldn't pick at a glance, you should look at the specs.
Bonum -- apparently that's the Latin word for good? I dunno, I just dropped words into Google Translate and then hacked off letters at random to fit the pattern. I'm sure they can come up with better fake words.