Raytracing on Meteor Lake's iGPU (chipsandcheese.com)
130 points by rbanffy 8 months ago | 32 comments



- How does this compare to the raytracing units in Apple A17 Pro/M3 series? They also provide additional eye candy for a relatively large cost I would say.

- Why are relatively expensive and large GPUs like in the RTX 3070 commonly called “midrange” online? The meaning of “midrange” seems to be creeping upward.


Within the market of discrete gaming GPUs, there has been a hierarchy from "low-" to "high-end" for decades -- since before integrated GPUs even existed. For almost 20 years, things with 192- or 256-bit memory busses have been "mid-range" (vs. 384- or occasionally 512-bit memory busses at the high end, and smaller at the low-end). NVIDIA's "7"-tier GPUs have historically been the top of the mid-range in this world.

Within this world "midrange" has been creeping upward not via big shifts in these fundamental characteristics but via:

1. Prices increasing steadily across the board, due to shortages and market power
2. Power budgets (and corresponding board/cooler sizes) increasing across the board

The fundamentals (memory bus width -- still 256-bit; die size and performance relative to the top of the line) remain "mid-range" in exactly this same sense.


Expressed differently: the memory bus and chip size of a 4070 are equivalent to those of a 3060, NOT a 3070.

Nvidia has shifted its portfolio to the left, so they can charge more for a smaller chip. I suspect this is mostly due to increasing manufacturing costs, not (just) pricing power.

This is also a reason why they are banking on AI upscalers to drive improvements in the future.


Put another way, they took the gains in technology and process improvements and banked them for themselves, releasing a new generation that achieves similar (in some cases worse) performance to last gen, albeit in an improved power envelope, at the same price as last gen, but with higher margins.

The fact they can do this speaks to the lack of competition in the GPU market at the midrange and up. Compare this to the CPU market, where we now have Intel and AMD giving it everything to leapfrog each other every 6-9 months.

I don't begrudge nvidia wanting to spend a generation consolidating their market position - that's their right - just as it's mine to look at a GPU that performs roughly the same as one I bought 7 years ago at almost the same price and say "no thanks".


> a GPU that performs roughly the same as one I bought 7 years ago

Out of curiosity, which GPUs are you referring to, and how are you measuring or comparing perf?


1080Ti vs 4060Ti. Factoring in nvidia's price creep and a worse exchange rate, the 4060Ti costs today roughly what I paid for my 1080Ti 7 years ago.

I'm comparing perf by looking at sites like UserBenchmark and reading/watching reviews of newer cards. I'm running into games I can't play at a level that I'm happy with, but based on reviews I don't think a 4060 or 4060Ti would make a meaningful difference.


That’s a very tricky comparison at best, 3 gens up but 2 product lines down. Comparing 1060 to 4060, or 1080 to 4080 might be more fair, even accounting for the skew in product lines.

I’m not sure how you’re doing the price comparison. 1080ti launched at $699 (with inflation that’s $870 today). The 4060ti launched at $399. Why does that seem the same to you?
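
A rough sketch of that inflation adjustment, in Python; the ~1.245 cumulative inflation factor is simply the one implied by the $699 -> $870 figures quoted here, not an official CPI number:

    # Sketch of the inflation adjustment quoted above. The ~1.245 cumulative
    # factor is an assumption implied by the $699 -> $870 figures in this thread.
    launch_price_1080ti = 699      # USD, March 2017
    cumulative_inflation = 1.245   # assumption: implied by $870 / $699
    adjusted = launch_price_1080ti * cumulative_inflation
    print(f"1080 Ti launch price in today's dollars: ~${adjusted:.0f}")  # ~$870
    print("4060 Ti launch MSRP: $399")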

We’re in a ray tracing thread and for ray tracing, there’s no contest, the 4060ti wins there by a long way.

UserBench says today the 4060ti is 17% faster for 20% lower price, which works out to about 33% lower cost per unit of performance… which interestingly tends to match their user score and sentiment.

https://gpu.userbenchmark.com/Compare/Nvidia-RTX-4060-Ti-vs-...
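
For reference, a quick sketch of how those two UserBench numbers translate into value; the 17%/20% figures come from the comment above, everything else is illustrative:

    # Sketch: turning "17% faster at 20% lower price" into perf per dollar.
    rel_perf = 1.17   # 4060 Ti performance relative to the 1080 Ti (UserBench figure)
    rel_price = 0.80  # 4060 Ti current price relative to the 1080 Ti (UserBench figure)
    perf_per_dollar_gain = rel_perf / rel_price - 1  # ~0.46
    cost_per_perf_drop = 1 - rel_price / rel_perf    # ~0.32
    print(f"{perf_per_dollar_gain:.0%} more performance per dollar")      # 46%
    print(f"{cost_per_perf_drop:.0%} lower cost per unit of performance")  # 32%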

Also worth noting that some old games are bottlenecked on the software, not on the GPU, so it’s not surprising to find a game that was around for the 1080 or before and doesn’t get any better with more modern GPUs. Comparing the 1080 to the 4080, fp32 perf went up ~5x, and due to Amdahl’s law, game benchmarks tend to see more like ~3x in overall system perf.
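
A minimal sketch of that Amdahl's-law point; the GPU-bound fraction used here is an assumption chosen purely for illustration:

    # Sketch: Amdahl's law for a frame whose time is only partly GPU-bound.
    def overall_speedup(p, s):
        """p = fraction of frame time that scales with GPU fp32, s = GPU speedup."""
        return 1.0 / ((1.0 - p) + p / s)

    # Assume ~83% of frame time is GPU-limited (illustrative, not measured).
    print(f"{overall_speedup(p=0.83, s=5.0):.1f}x overall")  # ~3.0x from a 5x faster GPU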


> Comparing ... might be more fair,

I don't make purchasing decisions based on what's 'fair', I look at what capabilities I have, what I lack, and what my budget is.

> Why does that seem the same to you?

Exchange rate. But you make a valid point that I should also factor in inflation.

> We’re in a ray tracing thread and for ray tracing, there’s no contest, the 4060ti wins there by a long way.

It wins in the sense that it can do ray tracing while the 1080Ti cannot, but based on my understanding, the xx60-tier cards aren't viable for raytracing at 60fps without DLSS anyway, and I'm still not convinced I want to consider DLSS performance to be the number I care about in my purchasing decisions.

> UserBench says today the 4060ti is 17% faster for 20% lower price,

The most important question I look at is "what games can I run at what resolution", and I don't think there's a single game I can get acceptable performance at 4K on a 4060Ti or even a 4070, that I can't get acceptable 4K performance on a 1080Ti. There are probably a couple that I could run at 1080p on a newer card that I have to run at 720p currently. I'm not using a variable/high refresh rate monitor and I'm not playing competitive shooters.


Nvidia doesn’t control the exchange rate, so if it really changed by 2x, then everything you said above applies to AMD or any other GPU company too. When comparing today’s prices for both GPUs, the exchange rate is irrelevant since it’s the same for both.

> I don’t think there’s a single game I can get acceptable performance at 4K on a 4060Ti or even a 4070, that I can’t get acceptable 4K performance on a 1080Ti.

This is a very different claim than you made above. Yes, all the GPUs since the 1080 can get “acceptable” 4K performance on most games that will run on the 1080 (I assume acceptable means >= 30fps), so that’s not surprising. This is ignoring the game quality settings. The benchmarks you pointed at demonstrate that if the settings have a 4060ti running just barely at 30fps, then the 1080ti will not be able to hit 30fps with the same settings, because the 1080ti is roughly 15% slower (the flip side of the 4060ti being 17% faster).
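
A quick sketch of that last step, using the same ~17% figure from the UserBench link above; the 30 fps target is just the "acceptable" threshold assumed in this thread:

    # Sketch: if the 4060 Ti just barely hits the target, the 1080 Ti falls short.
    target_fps_4060ti = 30.0
    rel_perf = 1.17  # 4060 Ti ~17% faster than the 1080 Ti, per the UserBench link above
    estimated_1080ti_fps = target_fps_4060ti / rel_perf
    print(f"Estimated 1080 Ti: ~{estimated_1080ti_fps:.1f} fps")  # ~25.6 fps, below 30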


Guessing this mostly comes down to exchange rate for you.

As sibling pointed out, the $699 1080 Ti would be $870 today in U.S. dollars.

The 4060 Ti is also a remarkably bad product (for the price). If you swap in the 4070, you're looking at a launch MSRP of $599 which is less than the 1080 Ti even before inflation, and about 50% faster (and over 30% faster than the 4060 Ti.)

Still not mind-blowing given the nearly 7 years between launches. There's a big delta going up to the 4080, both in price and performance. On UserBenchmark it's "135%" faster than a 1080 Ti, but at a mind-boggling $1200.


> releasing a new generation that achieves similar performance (in some cases worse) to last gen

Afaik the new gen still manages to improve over the old one, albeit modestly. Do you know of an example where that is not the case?


See this GN review: https://youtu.be/WS0sfOb_sVM?t=666&si=Xt62b_2BfQM-fnuH

Specifically, at 4K, the 3060 outperforms the 4060 in Cyberpunk. Most of the charts do show a gain, but I'd describe it as marginal rather than modest.


Yeah, the smaller memory interface hurts at 4k; that is unfortunately to be expected. Raw rasterizer/raytracer power shouldn't regress, I think.


> Why are relatively expensive and large GPUs like in the RTX 3070 commonly called “midrange”

Because the 30xx generation of graphics cards includes the 3050, 3060, 3070, 3080 and 3090. The 3070 is right in the middle of the range. Hence, midrange.


And also because it's a generation old. A card like the 1080 was high-end in its time, and still works well, but it isn't high-end anymore.

High-end gpus right now are the 4090 and 4080.


> The meaning of “midrange” seems to be creeping upward.

That has been the case since before I touched my first computer. The first mainframe I touched had 16 megabytes of memory, and my first desktop had 48 kilobytes. Both were midrange in their respective categories (although the mainframe was near the top in memory, it was only average in processing power - the Apple II+ was better than a VIC-20 and performed a little better than a C-64, but was slower than most of the "professional" personal computers).

My work laptop is midrange today and is many, many orders of magnitude more powerful than the supercomputers of that period.

To put things into perspective, my phone runs Unix on a RISC CPU closely coupled with an array processor.

That's Moore's law. It has slowed down a bit, in part from the difficulty of doubling density every couple of years, but also from the fact that even a 5th-generation Intel i3 is vastly more capable than what the average user needs. In GPUs it's a little different, with games requiring increasingly ludicrous amounts of compute power, pushing the "good enough" range upwards every year.

If you really like to play Pac-Man, an 8-bit CPU and a simple CRTC should suffice.


Do gamers actually want ray tracing? Or is this something like bloom/post effects/motion blur, which is computationally expensive and which gamers who care about their k/d shut off anyway to see the other team more easily?


Do I want global illumination? Yes, absolutely. Will it make games substantially better? Only marginally. The core experience will still be the gameplay.

Still, the visual spectacle of what some of these products can create is a fantastic experience in itself. It can enhance immersion in a drastic way, and especially for story heavy games, that can act as a direct multiplier on an already good story and structure. Bad games will still never go past being just a tech demo.


I feel like graphics are chasing the dragon, so to speak. I remember games of my childhood looking great; now they look like N64 games. I still had plenty of fun though.


From my experience it's the thing gamers enable after already getting 120fps at "ultra" in rasterization. Rasterization-based approximations of many of these visual features are very good now, and most people I know tend to have a better end experience from a higher frame rate than from fixing some of the less noticeable issues with those approximations.

So it's probably useful if you've got a 4090, but beyond that most try it once, go "Huh, that's neat", and then disable it.

But there's always a chicken and egg problem for every new feature - it may make sense to push support even if it's not particularly useful right now, as once it's ubiquitous there may be more use cases for lower total performance cards - "light" ray tracing may be a better solution than some rasterization tricks for things like GI, or even things like AI line of sight or other non-specifically-graphics tasks.

But as gamedevs have to support the non-RT path anyway, it probably doesn't make sense to develop two separate paths. So it's relegated to "optional" visual features only.
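
As a toy illustration of the "ray queries for AI line of sight" idea mentioned above: a purely hypothetical sketch where an NPC-to-player segment is tested against axis-aligned boxes standing in for occluders (all names and numbers are made up; a real engine would use its BVH/acceleration structures and hardware ray queries instead):

    # Toy sketch: segment-vs-AABB "line of sight" query, the kind of non-graphics
    # ray query the comment above speculates about. Everything here is hypothetical.
    from dataclasses import dataclass

    @dataclass
    class AABB:
        lo: tuple  # (x, y, z) minimum corner
        hi: tuple  # (x, y, z) maximum corner

    def segment_hits_box(a, b, box, eps=1e-9):
        """Slab test: does the segment from a to b pass through the box?"""
        t_min, t_max = 0.0, 1.0
        for axis in range(3):
            d = b[axis] - a[axis]
            if abs(d) < eps:
                # Segment parallel to this slab: miss unless it lies inside it.
                if a[axis] < box.lo[axis] or a[axis] > box.hi[axis]:
                    return False
            else:
                t1 = (box.lo[axis] - a[axis]) / d
                t2 = (box.hi[axis] - a[axis]) / d
                t_min = max(t_min, min(t1, t2))
                t_max = min(t_max, max(t1, t2))
                if t_min > t_max:
                    return False
        return True

    def has_line_of_sight(npc, player, occluders):
        return not any(segment_hits_box(npc, player, box) for box in occluders)

    wall = AABB(lo=(4, 0, -1), hi=(5, 3, 1))
    print(has_line_of_sight((0, 1, 0), (10, 1, 0), [wall]))  # False: the wall blocks it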


I played Control on a 2080 Ti and for me the difference with RTX on was worth the compromise of lower frame rate. Mind you, it’s a relatively slow-paced game, not a competitive shooter.


The better question is if developers want ray tracing; for them, it can represent a massive time and cost savings in terms of lighting games.


Raytracing is currently slow and not universally supported, so it's used to supplement existing pipelines. It's more work, not less.


That's the story with all new tech. Difficult transition to a better tomorrow. Rinse and repeat.


Totally agree! We're currently quite early in the transition, and the tech is not ready to stand on its own, but the future is bright!


Not every game (or gamer) is of the competitive type.

RT is pretty nice and more and more games are featuring it; the problem is that, at least on consoles, they can't go crazy with it, so some games only do reflections, for example, or just shadows, or just global illumination.

Sony is supposedly releasing their Pro model of the PS5 later this year with improved support for RT.


I think people who kneecap their graphics settings to eke out the last few millis of latency are the vast minority.


Really? I feel like most gamers set their graphics to whatever will get them around 60fps. People buying high refresh rate monitors are also going to want to take advantage of that with a high framerate or even vsync.


With desktop GPUs I certainly think there's room to improve framerate and image clarity before considering ray tracing. Maybe if you have a high-end GPU and an otherwise lightweight game it's worth it.

For a mobile GPU, like Meteor Lake? I wouldn't even bother.


For single player games it's probably still desirable. Cyberpunk 2077 with Ray Tracing looks amazing and enhances the gaming experience. Other than framerate issues, there's no drawback to having it on. For single player games, immersion is important.


Immersion is good, but I think noticeable frame drops are more immersion-breaking than not seeing a sunbeam.


For someone who played outside growing up, ray tracing is a nice addition; the baked lighting always drove me crazy in my games. I put it up there with the moment physics engines stopped feeling like I was floating through the game.



