
How far off are we from 8K gaming?



Quite far away. Even the Titan X Pascal is having some trouble maintaining a consistent 60 fps in all games.

http://www.gamersnexus.net/hwreviews/2659-nvidia-gtx-titan-x...

See the Mirror's Edge and The Division benchmarks. Averages are (just) over 60, but there are dips below.

Personally, I think the best resolution for current-gen cards is 3440x1440. Should be rock solid 60+ fps in all games, and gives the benefit of being ultrawide.


I haven't owned a gaming PC in close to 10 years, and today's performance numbers look completely crazy to me. The last time I played a game I was happy that I could get 1024x768 smoothly with high framerates. Seeing that you can play 3440x1440 at 60fps feels like magic.


It's amazing. Battlefield 1 on Ultra/headphones/3440x1440 at 60fps is incredibly immersive.


Have you gotten to see the ultrawides that run at 100hz? They're pretty incredible.


Do you have a link to one of these?


There are two and they use the same panel:

* ASUS PG348Q (I own this one and I love it)

* Acer Predator X34 (My colleague 4 seats down has this one and she loves it).


No, but I'd like one :D


I strongly disagree that being ultrawide is an advantage. 16:9 is already wider than optimal for gaming. The human field of view is actually about 4:3. While there is admittedly usually more interesting content to the sides than the vertical edges, I've found 16:10 to be preferable for immersion and would never dream of going wider. If anything, I'd go closer to a square if they still made them.

I'd rather have 2560x1600 than 3440x1440, even though it's fewer pixels.

IMAX film format has a more immersive aspect ratio too, which is taller still at about 16:11.


I have an ultrawide not necessarily to be able to perceive all my content at once, but so that I don't need dual monitors anymore for putting two documents next to each other without compromising the width of each too much. Even still, I came from a 27" 2560x1440 monitor and the edges are still of value to me in peripheral vision in games. Add in that most 34" ultrawide screens now seem to have a curve to them, and it makes visibility at the edges easier as well. Not having to set up an extra monitor and suffer the bezel in the middle is very much worth it for me, because otherwise I'd need 3x monitors, and at that point it gets insane with multiple monitors in portrait and such.


Try a 4K TV. I bought a card with 4K HDMI 2.0 out (for 60 fps), plugged it into my 46" TV and never looked back. The only drawback is needing to use the remote to turn it on/off. The monitor shuts off with DPMS, but when the machine is off it still searches for a signal. Doesn't bother me at all.


Well, I guess it's down to personal preference, because I would always prefer ultra wide to normal 16:9 for gaming. It's just a much better experience in every way, and going back to 4:3 is just miserable. Again, ymmv.


You're correct that we can only make out fine details in a narrow field of view, but that doesn't mean there isn't value to having more in our peripheral vision. Having extra horizontal width specifically is nice, because there is naturally more interesting stuff going on in that range; above and below you is just skybox and ground (in many first/third person 3D games), which you don't really need to see more of.


I'm loving the 3000x2000 screen on my Surface Book. Wish there were desktop monitors at that kind of resolution.


> The human field of view is actually about 4:3.

I believe it's more like 2:1 (so even 16:9 isn't quite wide enough).


There is a "regulatory" issue: some competitive games set or used to set a hardwired limit on the vertical FOV, and scale (or scaled) the maximum horizontal FOV to match the aspect ratio of the monitor (or multi-monitor setup).


Correct. 1440 @ 144hz is the sweet spot right now.


I personally prefer 1440p ultra wide at 100hz over 16:9 at 144hz.


Do you run into games that don't do well past 16:9?


Linus Tech Tips on YouTube recently got an 8K setup going using four 4K monitors and it ran surprisingly well.

https://m.youtube.com/watch?v=211Vdi4oC9o

The biggest limitation, it seems, is the lack of displays, because the GPU tech seems to be able to handle it.


Still far away. I'd say 2 cycles more at least.


And we basically need 8K on each eye to do VR at a proper resolution, too.


60 pixels per degree, 210° by 100° for full 20/20 vision (most people have slightly better)[0]. Headsets right now are ~12 pixels per degree.

That's a 12600x6000 screen = 76 megapixels. 4K is 8.3 megapixels, 8K is 33.2 megapixels. 8K/eye sounds about right.

[0] https://youtu.be/Qwh1LBzz3AU?t=23m39s
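Rough back-of-the-envelope check of those figures (the ppd and FOV numbers are the ones from the talk; the script is just the arithmetic):

    ppd = 60                     # pixels per degree for ~20/20 acuity (from the talk)
    h_fov, v_fov = 210, 100      # horizontal / vertical field of view, degrees

    width, height = ppd * h_fov, ppd * v_fov             # 12600 x 6000
    print(width, height, width * height / 1e6, "MP")     # ~75.6 MP total
    print("4K:", 3840 * 2160 / 1e6, "MP")                # ~8.3 MP
    print("8K:", 7680 * 4320 / 1e6, "MP")                # ~33.2 MP, so ~8K per eye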


No, we don't. The latest GPU architectures, including Vega (and Pascal, obviously), support rendering the scene geometry once and then projecting it from two viewports, generating both views without having to render the entire scene twice.

Here is an article on Nvidia's implementation: http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1...
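A crude sketch of the idea (illustrative numpy, not Nvidia's actual SMP API; the IPD value and the helpers are made up): the geometry is transformed once, and only the final per-eye projection step is repeated.

    import numpy as np

    def eye_view(eye_offset_x):
        # Hypothetical helper: a view matrix that only translates by the eye offset.
        view = np.eye(4)
        view[0, 3] = -eye_offset_x
        return view

    def perspective(fov_y_deg, aspect, near, far):
        # Standard OpenGL-style perspective projection matrix.
        f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
        m = np.zeros((4, 4))
        m[0, 0] = f / aspect
        m[1, 1] = f
        m[2, 2] = (far + near) / (near - far)
        m[2, 3] = (2 * far * near) / (near - far)
        m[3, 2] = -1.0
        return m

    # World-space vertices (homogeneous), produced by a single geometry pass.
    verts = np.array([[0.0, 0.0, -5.0, 1.0],
                      [1.0, 1.0, -6.0, 1.0]]).T

    proj = perspective(90, 1.0, 0.1, 100.0)
    ipd = 0.064  # assumed interpupillary distance in metres

    for name, eye_x in [("left", -ipd / 2), ("right", +ipd / 2)]:
        clip = proj @ eye_view(eye_x) @ verts   # only this per-eye step is repeated
        ndc = clip[:3] / clip[3]                # perspective divide
        print(name, ndc.T.round(3))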


The pixel shaders still have to run twice, and especially at 8k the pixel shaders will be the vast majority of the work.


Surely some of the work should be possible to reuse? For most pixels beyond a certain depth, the incident eye vector direction will be identical for all practical purposes, so if one could just fudge it and use the same calculated pixel color for both eyes, offset slightly, it should be usable without having to be calculated twice. No one would notice if the reflections or specular lobe for the right eye were calculated with the incident camera of the left eye.

Once you have calculated the pixels for the left eye, those should be possible to reuse for the right eye, with some mapping. Certain pixels that are only visible to the right eye will have to be computed. I'm not sure if it's possible, or if it even has a chance to be a performance gain (or indeed if this is actually how it already works). But doing the full job of 2x4K pixels for two eyes seems wasteful when a) they are almost identical for objects beyond a certain distance and b) quality is almost irrelevant for most pixels where the user isn't looking.

With foveal rendering and some shortcuts it should be possible to go faster with a 2x4k VR setup than for a regular 4k screen when you need to render every pixel perfectly because you don't know what's important/where the user is looking. Obviously one needs working eye tracking etc. first too...
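Something like this, roughly (a toy numpy sketch of depth-based reprojection, not how any shipping engine does it; it ignores occlusion ordering and just flags the holes that would still need shading):

    import numpy as np

    def reproject_left_to_right(color_l, depth_l, ipd_m, focal_px):
        # Shift each left-eye pixel horizontally by its disparity; nearer pixels
        # shift more, distant ones barely move. Last write wins (no depth test).
        h, w, _ = color_l.shape
        color_r = np.zeros_like(color_l)
        filled = np.zeros((h, w), dtype=bool)
        disparity = focal_px * ipd_m / np.maximum(depth_l, 1e-3)
        ys, xs = np.mgrid[0:h, 0:w]
        xs_r = np.round(xs - disparity).astype(int)
        valid = (xs_r >= 0) & (xs_r < w)
        color_r[ys[valid], xs_r[valid]] = color_l[ys[valid], xs[valid]]
        filled[ys[valid], xs_r[valid]] = True
        return color_r, ~filled          # ~filled = holes that still need full shading

    # Toy buffers: everything 10 m away except one near object.
    color_l = np.random.rand(4, 6, 3)
    depth_l = np.full((4, 6), 10.0)
    depth_l[1:3, 2:4] = 0.5
    color_r, holes = reproject_left_to_right(color_l, depth_l, ipd_m=0.064, focal_px=20)
    print("pixels still to shade for the right eye:", int(holes.sum()))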


What you're describing is already a thing, UE4 just shipped an experimental implementation in 4.15.

Search for Monoscopic Far Field Rendering here: https://www.unrealengine.com/blog/unreal-engine-4-15-release...


I agree with you. There's no reason to run every pixel shader twice in full.

It seems logical that each surface/polygon could be rendered once, for the eye that can see the most of it (a left facing surface for the left eye, a right facing surface for the right eye), then squashed to fit the correct view for the other eye. Then, fill in all the blanks. Of course, the real algorithm would be more complicated than this, but it seems like at least some rendering could be saved this way.

Technically the lighting won't be right, but you don't have to use it for every polygon, and real-time 3D rendering is already all about making it 'good enough' to trick the human visual system, not about being mathematically accurate. If technical accuracy were what we insisted on, games would be 100x100 pixels at 15fps, as we'd insist on using photon mapping.


If we do eye tracking we can probably lower that to 1024x768-equivalent rendering, by using high resolution where the eye is looking and tapering off to just a blurry mess further away. You can even completely leave out the pixels at the optic nerve's blind spot. The person with the headset won't be able to tell they aren't getting full 4K or even higher resolution. And we can run better effects, more anti-aliasing, maybe even raytracing in real time.
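Roughly this, as a toy sketch (the resolutions, fovea size and "renderer" are made up; a real implementation would blend the boundary and run on the GPU):

    import numpy as np

    def render(width, height):
        # Stand-in for an expensive renderer: just returns noise of that size.
        return np.random.rand(height, width, 3)

    def foveated_frame(full_w, full_h, gaze_x, gaze_y, fovea=256, scale=4):
        # Cheap pass: 1/scale resolution, upscaled by pixel repetition.
        low = render(full_w // scale, full_h // scale)
        frame = np.repeat(np.repeat(low, scale, axis=0), scale, axis=1)[:full_h, :full_w]
        # Expensive pass: full resolution, but only a small window around the gaze point.
        x0 = int(np.clip(gaze_x - fovea // 2, 0, full_w - fovea))
        y0 = int(np.clip(gaze_y - fovea // 2, 0, full_h - fovea))
        frame[y0:y0 + fovea, x0:x0 + fovea] = render(fovea, fovea)
        return frame

    frame = foveated_frame(3840, 2160, gaze_x=1900, gaze_y=1000)
    # Shaded pixels: 960*540 + 256*256 ≈ 0.58 MP, vs ~8.3 MP for the full 4K frame.
    print(frame.shape)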


If this is the Nvidia/SMI research you are referring to, it seems nice, but without details, specifically on dynamic performance, there is reason to be sceptical of how good it is.

The field of view of current consumer HMDs is too narrow for there to be a big saving compared to the downside. As you move to larger-FOV displays, the brain will start doing more saccades (rapid step changes in gaze direction[1]), and the response time of the image generator and eye tracker is too slow to generate more pixels at the right spot. It's much more effective to just render the whole thing at the maximum possible resolution. There has been promising research on rendering at a reduced update rate or reduced geometry in low-interest areas of the scene[2].

Also, eye tracking is a massive pain in the ass.

[1] https://www.ncbi.nlm.nih.gov/books/NBK10991/

[2] https://kth.diva-portal.org/smash/get/diva2:947325/FULLTEXT0...


2 screens of 4k is not 8k though.


Yeah, it's not, but that's close enough - if you have an 8K-capable GPU, then you should be able to run 4K on two eyes at very high frame rates.
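The pixel counts, for reference (assuming "8K" = 7680x4320 and "4K" = 3840x2160):

    uhd_4k = 3840 * 2160           # ≈ 8.3 MP
    uhd_8k = 7680 * 4320           # ≈ 33.2 MP, i.e. 4x the pixels of 4K
    two_eyes_4k = 2 * uhd_4k       # ≈ 16.6 MP, about half an 8K frame
    print(uhd_8k / uhd_4k, uhd_8k / two_eyes_4k)   # 4.0, 2.0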


We actually will need less GPU power for VR than for monitor gaming. Foveated rendering will do the trick.


Will be an exciting time once super high pixel count displays and eye tracking combine. What a time to be alive.


Check the framerates games get on 4k resolution, divide by 4, and then see how far away you are from at least 30fps.
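As a crude estimate (it assumes performance scales linearly with pixel count, which is only roughly true; the 4K number here is hypothetical):

    fps_4k = 45                    # hypothetical measured 4K framerate
    fps_8k_estimate = fps_4k / 4   # 8K has 4x the pixels of 4K
    print(fps_8k_estimate)         # ~11 fps, well short of a 30 fps target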


8K anything is almost pointless unless you have far above average eyesight, like 20/5.

4K is the real sweet spot for typical human vision.


Perhaps for a desktop or laptop, but in VR it will make a tremendous difference. 4320p will be really nice for VR.


This card can do 4K pretty comfortably now, at least if 60fps is the benchmark (VR needs more), but 8K is still pretty far away. Other than VR, there is hardly a use case for it though, and even at 4K, VR should be a lot better than it is now.



