The real payoff is for VR, where you need a very high frame rate so that turning your head rapidly doesn't produce judder. A rule of thumb for 24 FPS film production is to pan no faster than a full image width every seven seconds.[1] (I've seen 3 seconds elsewhere; this is RED's number.) That is too slow for interactivity and far too slow for VR, where you can turn your head fast.
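For concreteness: seven seconds at 24 FPS is 168 frames, so that guideline allows only about 1/168 of the image width of motion per frame, roughly 11 pixels per frame on a 1920-pixel-wide image. A fast head turn sweeps a full field of view in well under a second.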
(A related idea from the past was FrameFree compression. The idea was to delaminate an image into layers, and then morph the layers from one frame to the next. This allows arbitrary slowing of motion, generating fake intermediate frames. The hard problem is cutting the image into layers, which needs some kind of vision-oriented "AI"; that was hard back when FrameFree was developed around 2006, but is much easier now. That was the genesis of this idea. Auto-delamination is also used in the process of converting 2D movies to bad 3D movies, a fad which peaked about a decade ago.)
With computer graphics, you have a depth buffer, so you already have depth info for each pixel and can separate things by depth directly, without guessing. Also, the compute scales with the number of pixels, which is constant, not with scene complexity, so it's a near constant-time operation.
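As a minimal sketch of what that buys you, here is the depth-bucketing step plus a naive per-layer morph in Python/numpy. All of it is illustrative: the function names, the uniform depth slicing, the opaque-alpha assumption, and the known per-layer motion vectors are assumptions for the sketch, not any shipping API.

    import numpy as np

    def split_into_depth_layers(color, depth, num_layers=4):
        # color: (H, W, 4) RGBA frame; depth: (H, W) buffer normalized to [0, 1].
        # Bucketing each pixel by depth is O(pixels), independent of scene
        # complexity, which is why this is a near constant-time operation.
        bins = np.minimum((depth * num_layers).astype(int), num_layers - 1)
        layers = []
        for i in range(num_layers):
            mask = bins == i
            layer = np.zeros_like(color)
            layer[mask] = color[mask]
            layers.append(layer)  # index 0 = nearest slice
        return layers

    def fake_intermediate_frame(layers, per_layer_motion, t):
        # Shift each layer by a fraction t of its (dx, dy) motion vector,
        # then composite far-to-near (painter's algorithm). np.roll wraps
        # around at the edges; a real implementation would fill holes instead.
        out = np.zeros_like(layers[0])
        for layer, (dx, dy) in reversed(list(zip(layers, per_layer_motion))):
            shifted = np.roll(layer, (round(t * dy), round(t * dx)), axis=(0, 1))
            visible = shifted[..., 3] > 0  # assumes opaque (alpha = 1) pixels
            out[visible] = shifted[visible]
        return out

    # Example: one fake frame halfway between two rendered frames, with
    # nearer layers moving farther per frame (parallax).
    rgba = np.random.rand(480, 640, 4)
    z = np.random.rand(480, 640)
    layers = split_into_depth_layers(rgba, z)
    mid = fake_intermediate_frame(layers, [(8, 0), (4, 0), (2, 0), (1, 0)], t=0.5)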
This will probably be a standard feature of VR headgear soon. It beats adding motion blur, which doesn't really work for VR: you can turn your head while your eyes stay locked on a target, and suddenly the target goes blurry for no good reason.
On to 1000 FPS! Maybe it is possible to have VR that doesn't nauseate 5-15% of the population.
[1] https://www.red.com/red-101/camera-panning-speed