The reason gamers want higher framerates is mostly for lower latency, faster reaction times, and a bigger advantage in online play. DLSS 3 'fakes' its higher frame rates. As a result you might have 120 fps, but it's going to feel like 60 Hz. Personally, that inconsistency would drive me crazy. I'd rather use DLSS 2 performance than DLSS 3 quality.
> The reason gamers want higher framerates is mostly for lower latency,
"Gamers" are absolutely not a monolithic group, but in general, they mostly want higher frame rates because higher frame rates does a better job of feeling and looking fluid.
> faster reaction times,
60 fps is about 16 ms per frame. The end-to-end latency for a decent video game setup will be ~50 ms, and it's pretty well decoupled from frame time; the frame time itself only constitutes a small fraction of the overall latency. Most setups will have an end-to-end latency closer to 100 ms, so the difference between 60 fps and 120 fps matters even less.
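A rough back-of-the-envelope check of that claim, using the ~50 ms and ~100 ms figures above (which are estimates, not measurements):

```python
# Frame time as a fraction of assumed end-to-end latency.
# The 50 ms / 100 ms totals are the estimates quoted above, purely illustrative.

def frame_time_ms(fps: float) -> float:
    """Time budget of a single frame in milliseconds."""
    return 1000.0 / fps

for fps in (60, 120):
    frame = frame_time_ms(fps)
    for total_ms in (50, 100):
        print(f"{fps:3d} fps: {frame:.1f} ms frame = "
              f"{frame / total_ms:.0%} of {total_ms} ms end-to-end")

# 60 fps: 16.7 ms = 33% of 50 ms, 17% of 100 ms
# 120 fps: 8.3 ms = 17% of 50 ms,  8% of 100 ms
```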
> and a bigger advantage in online play.
Most gamers play at 30 fps or 60 fps on console. There are definitely a lot of folks playing framerate-sensitive games (like CS:GO) on PC which do benefit from increased framerate, up to ~300 fps (above that it gets complicated). But the vast majority of gamers are limited by skill and will see marginal benefit above 60.
> DLSS 3 'fakes' its higher frame rates. As a result you might have 120 fps, but it's going to feel like 60 Hz.
Visually it looks a lot like 120. Digital Foundry has a video on this. Outside of the definitely noticeable visual artifacting, 120 fps with DLSS 3 looks and plays like... 120 fps. Saying it feels like 60 is a wild thing to say.
It's possible that all games running DLSS 3 today poll input at the true frame rate rather than the frame-generated one, but if we're talking about the difference between 60 polls per second and 120 polls per second, that latency difference is only going to affect the top tier of players in the kinds of games where millisecond-level latency matters (an inherently niche set).
You could say that all gamers want the absolute best possible 300+ fps setup for their online play, but where they're not primarily limited by skill, they're going to be predominantly limited by budget. Doubling your framerate from raw compute alone will more than double your cost.
There are valid criticisms of DLSS 3, but I've never heard or seen anything like what you're describing here.
Source: I'm a graphics engineer in the industry (Not Nvidia)
I play an FPS shooter (Overwatch 2) at least weekly. I'm far from being a "pro"; more of an advanced hobbyist. When Windows spontaneously decides to reset the refresh rate back to 60, I'll notice it the second I aim by flicking my mouse. In addition to the strobe effect of low fps, the increased input lag in mouse controls is absolutely noticeable and really messes up my aim. So there's no need to be a pro to enjoy (and notice) increased responsiveness in FPS games, as long as you clock a few hours a week on them!
I know nothing of DLSS (I use AMD parts), but 100 ms sounds extreme to me, like way above playable. Did you try Nvidia's Reflex experiment? At 85 ms it feels like the mouse is stuck in tar. I read that the increase in aim precision was near 60%, and often up to 80%(!), going from the high-latency to the low-latency setup.
I don't know what my current system's total latency is, but I use a mouse with custom internals and an LG OLED screen for the lowest input lag. The difference between 60 and 120 Hz is huge even with my relatively low input-lag system.
This isn't meant as a criticism, as I stated I have no clue how DLSS feels and I quite enjoy playing games on my PS5 too (with an LG OLED and VRR), but the 100 ms just sounded off to me.
Actually, before posting I had a quick look at Nvidia's article. Their example of the worst system latency was 77 ms and the best was 12 ms.
> because higher frame rates do a better job of feeling and looking fluid
But a higher frame rate also does a good job of reducing input latency with the same number of frames in flight. Just try a game which uses the system mouse pointer and where you can move things around with the mouse: the gap between the mouse pointer and the dragged thing at 60 vs 120 fps is quite dramatic, even more so at 30 vs 60 fps. This is just an obvious visual representation of what is a bit more subtle with a first-person control scheme (which many non-hardcore gamers probably don't pay much attention to, but they still might wonder why one game feels "laggy" and another feels more "crisp").
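A rough sketch of why that gap scales the way it does, assuming total drag lag is roughly frames-in-flight times frame time (the frames-in-flight count and mouse speed below are illustrative assumptions, not measurements):

```python
# How far a dragged object trails the OS mouse pointer, assuming
# lag ≈ frames_in_flight * frame_time and a constant mouse speed.
# Both constants below are illustrative assumptions.

FRAMES_IN_FLIGHT = 3          # e.g. game logic + GPU queue + scanout
MOUSE_SPEED_PX_PER_S = 2000   # a quick flick across the screen

for fps in (30, 60, 120):
    lag_s = FRAMES_IN_FLIGHT / fps
    gap_px = MOUSE_SPEED_PX_PER_S * lag_s
    print(f"{fps:3d} fps: ~{lag_s * 1000:.0f} ms drag lag -> "
          f"object trails pointer by ~{gap_px:.0f} px")

# 30 fps: ~100 ms -> ~200 px; 60 fps: ~50 ms -> ~100 px; 120 fps: ~25 ms -> ~50 px
```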
I'm pretty sure Digital Foundry had indicated that DLSS 3 adds latency vs turning it off, so in the 60 fps vs 120 fps example, it would actually feel like 45 fps.
I work with game programming, mainly graphics and performance, and in recent years mainly single-player games.
My view is that the most important thing is a consistent frame rate, because stutter is jarring, breaks immersion and, even worse, can cause unfair problems like missing jumps or shots. Higher frame rates make for a more fluid experience.
It's already very common that visual update frequency is decoupled from the frame rate of the main game thread, and that really isn't noticeable as long as things are running smoothly.
In terms of visual/logic/input fps, I've worked on games and simulations with 60/60/60, 60/30/30, 30/30/30, and even 60/60/1000 (surgical training simulations). They all felt smooth and good as long as the frame rates stayed consistent.
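For readers unfamiliar with that split, here's a minimal sketch of decoupling the render rate from a fixed-timestep logic rate; the function names and rates are illustrative, not taken from any particular engine:

```python
# Fixed-timestep logic loop with a decoupled render rate.
# poll_input() could instead run on its own high-rate thread (the 60/60/1000 case).

import time

LOGIC_HZ = 60
LOGIC_DT = 1.0 / LOGIC_HZ

def run(render, step_logic, poll_input, running=lambda: True):
    last = time.perf_counter()
    acc = 0.0
    while running():
        now = time.perf_counter()
        acc += now - last
        last = now
        poll_input()
        while acc >= LOGIC_DT:   # catch up with as many fixed logic steps as needed
            step_logic(LOGIC_DT)
            acc -= LOGIC_DT
        render(acc / LOGIC_DT)   # alpha for interpolating between logic states
```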
The real payoff is for VR, where you need a huge frame rate so that turning your head rapidly doesn't show judder. A rule of thumb for 24 FPS film production is to pan no faster than a full image width every seven seconds.[1] (I've seen 3 seconds elsewhere; this is RED's number.) This is too slow for interactivity and far too slow for VR, where you can turn your head fast.
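Just to put that rule in concrete terms (the 1920 px frame width is an assumed example, not part of the rule):

```python
# Quick arithmetic on the "full image width every seven seconds" pan rule at 24 FPS.

FPS = 24
PAN_SECONDS = 7
WIDTH_PX = 1920   # assumed frame width for illustration

frames_per_pan = FPS * PAN_SECONDS        # 168 frames to cross the frame
px_per_frame = WIDTH_PX / frames_per_pan  # ~11.4 px of motion per frame
print(f"{frames_per_pan} frames per pan, ~{px_per_frame:.1f} px/frame")
```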
(A related idea in the past was FrameFree compression. The idea was to delaminate an image into layers, and then morph the layers from one frame to the next. This allows arbitrary slowing of motion, generating fake intermediate frames. The problem is cutting the image into layers. This needs some kind of vision-oriented "AI", which was hard back when FrameFree was developed around 2006, but is much easier now. That was the genesis of this idea. Auto-delamination is also used in the process of converting 2D movies to bad 3D movies, a fad which peaked about a decade ago.)
With computer graphics, you have a depth buffer, so you have depth info for each pixel and can separate things by depth directly, without guessing. Also, the compute scales with the number of pixels, which is constant, not with scene complexity, so it's a near constant-time operation.
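A hedged sketch of what that depth-based separation could look like (the layer count, quantile binning, and array shapes are my illustrative assumptions, not a description of any shipping implementation):

```python
# Slice a rendered frame into depth layers using the depth buffer, so each
# layer can be warped independently when synthesizing intermediate frames.

import numpy as np

def split_into_depth_layers(color, depth, n_layers=4):
    """color: (H, W, 3) uint8; depth: (H, W) floats in [0, 1] from the depth buffer."""
    # Pick layer boundaries from depth quantiles so layers stay roughly balanced.
    bounds = np.quantile(depth, np.linspace(0.0, 1.0, n_layers + 1))
    layers = []
    for near, far in zip(bounds[:-1], bounds[1:]):
        mask = (depth >= near) & (depth <= far)
        rgba = np.zeros((*depth.shape, 4), dtype=np.uint8)
        rgba[..., :3] = color
        rgba[..., 3] = mask * 255   # alpha marks which pixels belong to this layer
        layers.append(rgba)
    return layers                   # composite back-to-front after warping each layer
```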
This will probably be a standard feature of VR headgear soon. Beats adding motion blur, which doesn't really work for VR, because you can turn your head while the eyes track a target. Suddenly the target goes blurry, for no good reason.
On to 1000 FPS! Maybe it is possible to have VR that doesn't nauseate 5-15% of the population.
[1] https://www.red.com/red-101/camera-panning-speed