Hmm, so why would anyone buy a Titan X now? For 1 GB extra (slower) RAM? Did they hobble it in any other way?
Edit: According to Anandtech, "NVIDIA has been surprisingly candid in admitting that unless compute customers need the last 1GB of VRAM offered by the Titan, they’re likely going to buy the GTX 1080 Ti instead."
I would suggest changing the article link to Anandtech, as it's really a much more informative and better written article. Jen-Hsun Huang seems to have bottled his cringe-inducing presentation style and fed it to whoever wrote that PR piece on the Nvidia site.
http://www.anandtech.com/show/11172/nvidia-unveils-geforce-g...
I believe gamers who spend that kind of money on a graphics card do it more for cultural than functional reasons. It's like buying a car model upgrade that has little practical value aside from showing off that you could afford it.
No, monitor technology is currently evolving fast, so GPUs are struggling to meet the highest interface requirements, notably 3440x1440@144Hz and 4K@60Hz (and don't even think about more frames at 4K).
I have a GTX 1070 mostly for deep learning, but I have to admit that piece of hardware was pleasing the casual gamer in me as well. It's only about 20% behind a GTX 1080 for gaming. A $500 GPU, twice as much as I had ever spent on leisure as a gamer.
Well on my 4K display, I can push most pre-2014 games to 60 fps, but that's it. After that, for most AAA games, I have to resort to 1440p + some AA. And if I was really into gaming I'd have bought an ultrawide 1440p/144Hz monitor, no doubt about it, knowing pretty well I'd need a Titan XP or at least 1080 to see these framerates on newer titles.
It's a space that's moving fast, both on the interface side (monitors, VR...) and the processing side (Nvidia makes a 20-30% leap each gen, kind of insane compared to Intel CPU gens for instance). We're only beginning to touch low-level APIs once again after a two-decade hiatus on the PC (DirectX made it easy to create 3D apps, but at the cost of a lot of performance/overhead, which DX12 is trying to address).
I'm jaded about CPUs; honestly, Ryzen is very welcome economically but nothing to be enthused about as a nerd/engineer. GPUs and display technology, however, are where it's at these days. Can't wait for proper AR too (HoloLens-like technology, still a few gens away from a commercially viable product).
I'm sorry, but while I agree GPU and monitor technology is improving quickly, I think you're giving some incorrect info. Ultrawide 1440p monitors cap out at 100Hz right now, not 144Hz. The current cards are pushing the 100Hz ultrawides pretty much to their max, and the 1080 Ti will take up whatever slack is left. 4K is possible past 60Hz, and a single 1080 is doing fairly well at 4K as is. 1080s in SLI can easily push a 4K monitor past 60fps, and Asus is releasing a 4K 144Hz monitor around Q3 of this year because of that. The tech is moving fast, but you're exaggerating quite a bit on how hard it is to push these new displays.
4K is not really possible past 60 FPS with current games on highest settings. And those are what the target group of gamers want, I see that basically every day.
For an example, take the latest Deus Ex: http://www.techspot.com/review/1235-deus-ex-mankind-divided-.... It is also a good example for the next statement: the heaviest games don't even get 60 FPS at 1440p; on Ultra the average with a GTX 1080 is 43. 4K is at 24, basically unplayable. Even if you were to use SLI, which is by now pretty much agreed to be more hassle than it's worth, it wouldn't be at 60 FPS. And that disregards that 2x1080 are/were 200% the price of a single GTX 1080 Ti, and won't come close to 200% the performance.
You can see that how you want, but I can pretty much guarantee that gamers, at least the segment of gamers that would even consider investing that much in a GPU, will absolutely sprint to get a GTX 1080 Ti if the performance increase is real. They need it for VR + 4K, or at the very least believe so.
As someone who plays things at 4K on my 1070 at home, let me add that things still look freaking amazing at 4K, even without Ultra settings. You can usually turn off antialiasing (sweet), and there are some effects that you might not notice (or care about) compared to the speed gain.
The best part is, in two years (or four) when you buy a better card, all the titles that are currently new will tend to look about as good as 2-year-old titles look now. ;)
I have a 3440x1440 at 95hz (my Acer Predator X34 didn't quite meet the 100hz claim..), and 2x 980s. One of the 980s isn't fast enough to drive it properly; two work fine, but I'm almost always having SLI issues it seems. Therefore the 1080TI will be a massive improvement to me, especially as I run 2x 1920x1200s around the 34" and would like to use frameless windows - which essentially reduces the 2x cards to 1x (SLI only works in fullscreen). For example Elite Dangerous will start to stutter when only using one of the cards.
I've crunched the numbers and the 1080TI will essentially offer the 980 SLI performance at its best - and that's before the extra RAM and Pascal features come into play - as well as the reduced heat levels etc.
So count me in, especially when Vega comes out and presses the prices further south. (Alas my monitor uses GSync, so that won't be an option for me.)
FWIW I'd like to mention that the 34" screen is completely brilliant for programming (Visual Studio), which is what I use the setup for most of the time.
You can't get any games above like 30 fps on 4K monitors even without maxing out graphics options on a $350 card today. Add multiple monitors... slows to a crawl. You can hear the fans working to keep up with non-gaming tasks...
While I do feel like noticeable CPU performance enhancements have mostly plateaued, I think we are 4-5 years away from affordable and good 4K graphics cards. The choices on the market at present aren't all that amazing. You either pay a lot for meh, or a fortune for acceptable.
Nothing on the market today is like, "OMG it handles anything we throw at it without breaking a sweat..." And that's fine... it's usually 2-3 years after a game comes out that I can run it at max settings.
SLI/CF is a huge hack - only allows true fullscreen, does weird tricks to render on both GPUs, occasionally gets glitchy results and far more crashes. You do not want to use SLI unless you have no other option.
4k@60 gaming on a single card is an excellent goal. I'm running a single 1080 for this purpose at the moment and on many games it struggles and I have to reduce options significantly.
The 1080Ti should be performant enough to run 4k@60FPS on most games on a single card. The 1080 was close, running ~45-50 FPS, but this should push it over that edge.
Then there are the people who want 4k@144 and stuff who'll be running 2 of these in no time and still not getting the numbers they want...
Hopefully within a few iterations 4k@60 will get down to an affordable level. I just really want to go back to buying $300-500 cards instead of $1k cards and this means we're on the way.
I have to disagree; there are a couple of consumer-grade 1440p 144Hz monitors (FreeSync) - the ASUS MG279Q and Acer Predator XF270HU - and even a 165Hz one, the Acer Predator XB271HU (G-Sync), all IPS. There are also VA ultrawide panels like the BenQ XR3501 and Acer Predator Z35 with 3440x1440px and 144Hz. Technology in this area has been progressing at a decent pace recently.
BenQ XR3501 and Acer Predator Z35 are 2560x1080px, not 3440x1440px. The other three are 2560x1440px, not ultrawide 3440x1440px. Parent is correct that fastest 3440x1440px displays available today are 100Hz.
None of the ultra wide 1440p monitors have 144hz. Almost all of the models you listed are 16:9 monitors and the one ultra wide is not 1440p. I agree that the tech is progressing but this generation of GPUs are pushing them quite well.
Why is your focus on ultrawide monitors? I'd bet most gamers do not use them for various reasons. You may have one but it is not the standard by which to gauge the viability of the GPU market. For example, Acer has a 240hz gaming monitor out now.
Two reasons: first, the earlier post specifically called out that, "...ultra wide 1440p monitors cap out at 100hz right now, not 144hz."
Second, it's all about pixel count. While 4k is still miles ahead of even the 3440x1440 ultra-wide resolution, that resolution represents nearly as big a jump in pixel count as the jump from 1080p to 1440p.
The goal is 4k at a minimum of 60fps. Consistent 100 hz or even 144hz on an ultra-wide 3440x1440 monitor represents a solid step in that direction.
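Putting rough numbers on that (a quick sketch; the exact deltas depend on which resolutions you compare):

    # Per-frame pixel counts for the resolutions being discussed.
    resolutions = [
        ("1080p (1920x1080)",     1920 * 1080),
        ("1440p (2560x1440)",     2560 * 1440),
        ("ultrawide (3440x1440)", 3440 * 1440),
        ("4K (3840x2160)",        3840 * 2160),
    ]

    prev = None
    for name, pixels in resolutions:
        jump = f" (+{pixels - prev:,} over the previous step)" if prev else ""
        print(f"{name}: {pixels:,} pixels{jump}")
        prev = pixels

    # 1080p -> 1440p adds ~1.6M pixels per frame; 1440p -> 3440x1440 adds ~1.3M,
    # so the ultrawide step really is nearly as big. 4K is still ~8.3M per frame.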
I have used 4k for games for years and have never had trouble getting 60fps in every game.
With a 1060, when I really started gaming, it was pretty hard to have very good settings and get 60fps. Overwatch on Ultra ran at about 50fps (100% render scale), GTA 5 on the "nvidia optimized" settings was almost always 60fps but would drop down to 30fps sometimes. With a 1080, everything is always 60fps, even on Epic with some modifications (fog distance I think has the biggest effect). My attempt to max out the settings to just get 60fps has led to slow framerates (30-40) during hero selection and the "play of the game" intros, though. Only sometimes, it's very strange. (With the nvidia-optimized settings, this doesn't happen, though. But the graphics aren't quite as good in-game.)
Overall you should basically be able to pick any GPU and play any game at 4k. I would like to get another monitor for 1080p/1440p at 144Hz, but don't have anywhere to put it, so 4k@60 is enough for me. IPS panels are nice for everything except gaming, so I live with the compromise when playing games. Screen refresh rate is not the limiting factor in my ability, not by a long shot.
> I have used 4k for games for years and have never had trouble getting 60fps in every game.
Try Battlefield 1, or some other latest-gen game (I believe Rise of the Tomb Raider is also in this category). You won't get 4k60 on that one without a Titan X or a 1080 Ti.
That is, with the image quality settings maxed out, of course.
Completely disagree. A single 1080TI will finally be "good enough" for a lot of high resolution / high frequency use cases where you had to go SLI before.
I find it useful to multiply resolution with refresh rate to arrive at a number that indicates the number of pixels being pushed per second:
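For example (a quick Python sketch, assuming a 3440x1440 ultrawide at 100Hz, 4K at 60Hz, and a 27" 2560x1440 panel at 144Hz):

    # Pixels pushed per second = horizontal * vertical * refresh rate.
    setups = [
        ('34" ultrawide, 3440x1440 @ 100Hz', 3440, 1440, 100),
        ('4K, 3840x2160 @ 60Hz',             3840, 2160, 60),
        ('27" 2560x1440 @ 144Hz',            2560, 1440, 144),
    ]

    for name, w, h, hz in setups:
        print(f"{name}: {w * h * hz / 1e6:.0f} Mpix/s")

    # ~495, ~498 and ~531 Mpix/s respectively.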
So there you go - a maxed out 34" ultrawide has just about the same requirements as a 60hz 4k monitor. Yet a normal 27" 144hz will out-require both of them ... and they come in even faster variants yet!
Edit: And then you start seeing 144hz 4k screens; you will still find yourself wanting with a single 1080TI.
Even dual 1080 GTX cards can't max out settings on the VR games currently available for Oculus / HTC Vive.
As for traditional use cases, I have a 3x monitor setup (2K@144Hz / 4K@60Hz / 2K@144Hz) and my 1080 GTX can't max out everything smoothly while I stream/record high-end games.
I would love it if all this was just vanity, but it's my job. This stuff is an expense I'd happily avoid if possible. I don't own any nonsensical RGB-lighted cases or non-essential peripherals.
I have a 4K screen and a GTX 1080, and I struggle to play most high-end games at a stable 4K 60fps with it. VR is another use case where the extra power is probably not wasted. DOOM 2016 maxed out at 4K is a sight to behold.
Can you tell me (honest question) what improves with 4K for games? I mean, for regular content you can see more detail in the picture, but games usually just render essentially the same picture at a larger resolution. And at full HD with modern capabilities the picture is already pretty smooth, so does 4K bring anything (I guess it can add a bit of smoothness as an additional step of AA)? Is it worth the trouble?
At the current price I don't think it's really worth it, no. Really few games manage to take full advantage of 4K (Doom '16, as I mentioned, is one of them). Too many console ports with low texture resolutions and terrible optimizations these days. Older games will of course fare much better but the additional sharpness in the graphics is probably not worth the price of the hardware. For gaming I think 1440p and a high framerate are a better compromise at the moment.
That being said I have some trouble going back to my 1200p screens at work after being used to 4k at home. Surprisingly even for emacs and other work-related things 4K makes a pretty huge difference in my experience. Everything remains razor sharp even with a small font. I can split my screen and display a few full page datasheets and schematics alongside my code. No need to zoom in or anything. No aliasing, no pixels, I find it very relaxing.
Those hold up pretty well at 4K. In contrast I present to you Wolfenstein: The New Order which often looks very mediocre and yet manages to run worse than DOOM:
Everything has More Detail, sometimes more than you care about. In some ways this feels like it's Not Important, except you definitely notice it, especially if your game has sliders that determine how far away they start doing detail culling by using lower-resolution models/textures. I can safely turn off anti-aliasing everywhere, which basically seems to balance that there are 4x as many pixels. ;)
World of Warcraft: Everything looks the same, except text is prettier, item textures are more detailed, and spell icons suddenly have noticeable details. ("Hey that's a guy with a shield! I never noticed that!")
Dishonored, or other FPS-es: Far-away details suddenly are not lost in the haze of antialiasing. Wires and greebling on bridges are visible, and Look Good. HUDs and ammo displays seem to universally SUCK -- they are often not scaled in older games.
All the faces in most older games benefit from the higher-detail textures, but they get closer to the uncanny valley. (I notice this a lot in BioShock Infinite.) I haven't re-played Wolfenstein.
It's much like looking at the same picture or webpage on a retina screen vs a non-retina screen. Text looks better, some other things look much the same. Anything vector-based (HUDs, etc) might look better.
For 3D the biggest improvement is that you have more pixels to render objects at long distance, which is huge for games with large viewing distances. For example, you can spot enemies much more easily in Battlefield.
I see your point. I guess it also requires a much larger screen. Because if a 1-pixel man is moving on my 27-inch full HD screen, then even though he'd be 4 pixels on a 4K display, he'd still be the same physical size and not much easier to notice :)
Demonstrably false, 4K 60fps gaming on max settings on most games (1080p@60fps is nothing on most high end GPUs) is still not really possible, and a 1080Ti probably won't achieve it either since the Titan X wasn't quite able to.
Also, VR gaming can never have enough GPU power, because supersampling increases the fidelity by a lot (first hand experience)
> Demonstrably false, 4K 60fps gaming on max settings on most games (1080p@60fps is nothing on most high end GPUs) is still not really possible, and a 1080Ti probably won't achieve it either since the Titan X wasn't quite able to.
You literally just argued why it is demonstrably true. If neither a Titan X, a Ti, nor a standard 1080 can get to 4K 60 on max, then buying a Titan X or Ti is nothing more than bragging rights.
Nobody is arguing that the Ti isn't more powerful, just that that power likely doesn't buy you anything specific, all you've done is show how that is true.
My brother and his friends are PC gamers, and they were anticipating the Pascal 10x series for over a year because of the improved performance.
Most of them upgraded from the two earlier series, because to get 1440p gaming on the new 144Hz monitors with high settings on modern games you really need these new cards.
I had similar thoughts to your own re: the hype, until I saw the games. They are amazing - and I find VR boring after 10 minutes.
Mainly H1Z1, Battlefield and Overwatch but I also saw GTA V, Fallout 4, FIFA, Battlefront and Witcher.
Both the games that are designed for the higher resolution, as well as the games that 'scale up' (can't recall which was which) are amazing. It's a completely different gaming experience and I understand now how people become obsessed with frame rates, specs, cards, the unveils, the news and rumors etc.
I was so close to pulling the trigger myself even though I don't have a PC. I could probably justify the $500 + $500 for card and monitor on an existing PC (or a Mac upgrade, *clears throat*) but not an entirely new machine to play games.
If you haven't seen cutting edge PC gaming go and check it out somewhere or somehow. There was this entire revolution happening that I knew nothing about.
>It's a completely different gaming experience and I understand now how people become obsessed with frame rates, specs, cards, the unveils, the news and rumors etc.
I also partly think this is cultural, as the original commenter noted.
Last year shortly after Overwatch came out I built a new gaming PC, featuring an i5 6600K, GTX 980Ti, and 27" 1080p 144Hz monitor. My buddies gave me some flack for it because even at the time (~June 2016) these were not the top of the line options. Nevermind the fact that even with all settings on Ultra I was able to consistently get 120-140 FPS, and with settings turned down to their most basic (a common practice among competitive gamers to decrease input latency) I never dropped below 144, even in the middle of the most dense game play.
Granted, I'm playing at 1080p, but I have yet to find a reason to want more than that, and the performance is where I want it to be, so I'm satisfied. Still, since I didn't trade hundreds of extra dollars for an i7 and a GTX 1080 I'm the noob.
I lowered game settings to reach 100fps on my 100Hz Trinitron screen back in '98 too. Like everything it can be cultural, but to suggest it's only that is ridiculous. People still buy expensive functional clothes, bikes, and everything else. Just because you can work out in a t-shirt from H&M on a $100 bike doesn't mean there's no practical value in going more expensive.
> Granted, I'm playing at 1080p, but I have yet to find a reason to want more than that, and the performance is where I want it to be, so I'm satisfied. Still, since I didn't trade hundreds of extra dollars for an i7 and a GTX 1080 I'm the noob.
So what you're saying is that you find a value in playing at 144Hz over 60Hz, but object to people finding value in playing at a higher resolution? Someone else might argue that you paying to play at 144Hz is only cultural, while you know it's not.
>object to people finding value in playing at a higher resolution
Except I didn't object to anything. Me calling the wants and needs of the high performance PC culture "cultural" is not me being dismissive of what they happen to enjoy. Buy whatever you want. My whole point was that you can still have a professional quality (as in eSports level) experience with equipment that's not super-ultra-top-of-the-line-fantastic, saving you the super-ultra-top-of-the-line-fantastic premiums.
> My whole point was that you can still have a professional quality (as in eSports level) experience with equipment that's not super-ultra-top-of-the-line-fantastic, saving you the super-ultra-top-of-the-line-fantastic premiums.
I'm not sure I understand. In this area (as opposed to functional clothing etc.) it's very easy to measure real differences between a GTX 1070 and a 1080, and to see the benefit of more than 1080p when the screen is 27" and up.
That you can get a rags-to-riches story in eSports learning to play on a 5 year old computer is very true, but it doesn't detract from the fact that it's easier with better hardware.
If you don't feel it's worth it, it's not worth it for you, but the difference is very real.
I'm a potential customer with enough money to buy basically whatever I want, but nothing about a resolution greater than 1080p has been enticing to me so far (and I own a 4K TV that my feelings are generally "meh" about).
I'm satisfied, so really it's not my problem; it's the problem of the hardware manufacturers and game developers to make better products and do a better job of convincing me that a higher resolution is something I need, and so far they haven't.
Have you tried watching 4K content on Netflix? The difference is not subtle. It's not quite the same leap as DVD->Blu-ray, but it's not that far off. It's not so much more detail as everything seeming more in focus.
I've watched some of Amazon's 4K content and wasn't particularly blown away. In fact, I had to pause the Fire TV a few times just to see whether or not the stream was still in 4K, because if your network traffic slows down it will revert back to 1080. It wasn't noticeable enough for me on a 50" screen at about 8 feet away.
Have you verified that the HDMI port you're using supports 4K and the necessary HDCP? I stream Netflix directly on my smart TV (as well as YouTube 4K), and like I said, the difference is immediately obvious. I suspect you're getting 4K downscaled to 1080p.
It's definitely working -- in fact, the FireTV won't even display the "4K Ultra HD" categories unless it's connected correctly to a compatible TV[0]. I notice the difference if I actively try to, I just don't think it's as mind blowing as everyone is claiming it to be. It's certainly not something I couldn't live without.
OK. I've got a PC with gtx970 I occasionally use for gaming, mostly Battlefront (at 1080p, haven't tried 4k). It's pretty good, wouldn't say it blows my mind like the HTC Vive demos I've tried though
I have a 980 Ti and it can't always get 60fps at 4K. So for me it's not just shelling out for the biggest card, I'm trying to get the best out of my hardware to make for a better experience.
Sure, but gamers would buy the Ti because extra speed is more important than a bit of RAM for gaming. Even machine learning people who need lots of RAM will probably not decide to spend $500 on the Titan XP just for 1 GB.
This might be the first sub $1000 card to drive games at 4K above 60fps, so that's kind of cool. Considering that there are a few 4K 144Hz IPS monitors releasing this year the card lines up nicely.
I believe there is a subset of gamers who spend this kind of money for cultural reasons. But I build expensive beasts once and drive them until they run into the ground. My dual 690s handled "everything I could throw at them"[0] for years and years. I only just upgraded to dual 1080 FTW editions (also very expensive) and expect to run those for quite a few years as well.[1]
[0] Caveat: I only ever run a single monitor on my gaming rigs.
[1] Turns out I get VR sickness something fierce, so VR won't really be a driving need for me.
Personally, I play at 3440x1440. I want 0 bottlenecks and more power than I need, so a 1080Ti is perfect for me. Especially as in the UK I can get it for ~£30 more than I paid for the 1080.
The Titan X seemed to fit between the Teslas used in business applications and the 10x series used for consumer gaming - prosumers doing video and rendering on workstations.
I'm glad Nvidia chose to cannibalize that market rather than artificially constrain the consumer series.
Wasn't it Intel that did this with Celerons at some point to protect the performance of the higher-end workstation market? IIRC they underclocked some processors that, outside of L1/L2 cache, weren't so different from the much more expensive Pentium 4/Xeon or whatever.
How does this stack up against the Titan X for ML applications? Is the extra 1 GB of RAM critical, or will this be a comparable (or even better) offering?
The difference between 11gb and 12gb isn't meaningful for most users of ML.
Yes, there are problems where squeezing in a slightly larger model than 11 GB would give slightly better results; but that size of problem would generally also require a lot of computing power, and at that point you're not comparing the specs of a single card but benchmarks and scaling for clusters of multi-GPU machines.
Not in SLI, no. SLI is a technology to have two GPUs act as one, by syncing among themselves. When you have two GPUs in SLI, your CPU program will generally see them as a single GPU with (almost) twice as many cores and twice as much bandwidth, but the same amount of memory.
For this to work, the GPUs need to be very similar (i.e. you can't SLI a TitanX and a 1080).
You can, however, put many[1] different GPUs in the same system, and then the onus is on the CPU program to properly manage and send work to each one separately. This is not SLI.
Generally, SLI is useful for games where the program assumes "a GPU" and sends work to it, and you want to have more performance than your single card can provide. For compute/ML, you're much better off just using each card directly.
[1]: Don't quote me on this, but I believe up to 8x is supported; above that is uncharted territory.
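To illustrate the "use each card directly" approach, here's a minimal sketch (assuming PyTorch; the same idea applies to raw CUDA or TensorFlow) that enumerates the GPUs and hands each one its own chunk of work, with no SLI involved:

    import torch

    # Each visible CUDA device shows up independently; no SLI bridge is involved.
    num_gpus = torch.cuda.device_count()
    print("Visible GPUs:", num_gpus)

    # Toy workload: give every GPU its own batch of matrices to multiply.
    results = []
    for gpu_id in range(num_gpus):
        a = torch.randn(1024, 1024).cuda(gpu_id)   # allocate on this specific GPU
        b = torch.randn(1024, 1024).cuda(gpu_id)
        results.append(a @ b)                      # each GPU works on its own data

    # Gather the results back on the CPU when needed.
    print([r.sum().item() for r in results])

A real training setup would also overlap work per GPU with streams or separate processes, but the point is the same: the host program sees N independent devices and divides the work itself.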
Sorry for asking, but you might have a better insight. I have two questions, which you might answer:
- Is it possible, in an officially supported way, to have both a GTX and a Quadro in the machine at the same time? I prefer GTX for rendering speed (Redshift renderer), but I need the 10-bit output that Quadro has and GTX doesn't.
- Are there GTX (1080 Ti in this case) cards out there which don't take up two PCI slots? I don't need multiple outputs and I can hook them up to a water block, if needed. I need as many cards in a single workstation case as possible for rendering. I've seen people saw/solder stuff off cards in order to get there, but that's not something I would do personally. Ideally I would prefer to have eight 1080 Tis in a single machine along with a single (modest) Quadro for 10-bit output to three monitors - are there even motherboards that could support that?
Note that I'm looking for workstation / deskside solution, not server 747-fan-sound-behemoths.
You can use PCIe risers to give yourself some more flexibility as to how your cards are plugged in. However, please make sure you have enough PCIe x16 or x8 slots. If you're not running a server motherboard, I highly doubt you have enough PCIe lanes to fit more than two or three cards.
Is that a recent development? One of Quadro's features over GTX was/is 10-bit output. No way around it if you're using a 10-bit monitor like an EIZO ColorEdge or whatever.
My GTX on Windows also claims to output 10 bits. From some playing around in Photoshop, it seems like the monitor I have does not actually use the extra data. (It asks for it, but either doesn't receive it, or doesn't process it properly.)
"You can, however, put many[1] different GPUs in the same system, and then the onus is on the CPU program to properly manage and send work to each one separately. This is not SLI."
Interesting that you mention this as I've been searching for an answer before I build a ML rig and embark on learning it. Is there any benefit for ML performance if I put a GTX 970 and a 1080[ti|x] in the same machine? Just trying to figure out how to re-use spare parts.
SLI is only for realtime rendering with Direct3D and OpenGL. Tesla GPU products are intended primarily for CUDA and OpenCL use, which have always required the application to be aware of the underlying hardware configuration. There's no need to pretend that the 16 GPUs are one super-powerful accelerator device.
They're treated as independent compute units and the host program load-balances compute work between them. You can even do P2P memory copy or access between your CUDA devices without going through the CPU. See https://www.nvidia.com/docs/IO/116711/sc11-multi-gpu.pdf for a high-level overview.
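As a rough sketch of what that looks like from the host side (again assuming PyTorch rather than raw CUDA; under the hood the device-to-device copy can use a peer-to-peer transfer over PCIe when the driver supports it, and is staged through host memory otherwise):

    import torch

    assert torch.cuda.device_count() >= 2, "needs at least two CUDA devices"

    # Build a tensor on GPU 0, then copy it straight to GPU 1.
    x = torch.randn(4096, 4096).cuda(0)
    y = x.cuda(1)                 # device-to-device copy, no explicit CPU round-trip in user code

    print(x.device, y.device)     # cuda:0 cuda:1
    print(torch.allclose(x.cpu(), y.cpu()))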
Some of the high end Xeon systems have gobs of PCIe lanes. If you have a four CPU socket system, you can hang four x16 PCIe cards off of each processor.
Both have a max of 32 PCI Express lanes, so you can theoretically hang at most 2 PCIe x16 cards off each CPU; but as a few of the lanes are reserved for other peripherals, in a 4-GPUs-per-CPU setup they will in practice run at lower speeds, i.e. 8/8/4/4. However, using a system with a PCIe root complex [1] can improve the GPU<->GPU communication speed.
I understand there was a recent shift away from such practises at nvidia, but am I correct in saying that the 3-way sli bridges do still work with the newest cards?
Is it possible and would it make sense to use an SLI Bridge without using SLI to boost the cross-card data transfer rates when doing compute workloads?
When looking at the advancements in GPUs these days, where are the improvements coming from? Are they just throwing more silicon at the problem? Is it process improvements? Smarter design? All of the above? Can we expect these advancements (and cost reductions) to continue, or how far away are we from the wall that CPUs have hit?
Something nice about GPUs is that graphics rasterization is a very parallel problem. If you can handle thermals, power consumption, memory bandwidth, etc. that go with it, you can pretty much just add more of the parts that do the processing (in this case, CUDA cores - 3,584 of them) and get a more powerful GPU. A 1080 TI has 12 billion transistors. The 1080 had 7.2 billion. The 980 had 5.2 billion.
CPUs handle more branching problems, so it's harder to get improvements by just adding cores.
They're also on a smaller process than the pre-10-series cards - 16nm, down from 28nm - which lowers power consumption and therefore heat. It's a huge help and it's part of what allows them to just add more transistors in the first place - but in this area, GPUs will run into the same problems CPUs do. It's hard to shrink transistors.
I appreciate they wrote it the way they did. I didn't know what the abbreviation would have meant, and now I do. Also, I now have a clue what "IANA" at the start of an abbreviation I have never seen before might mean.
I was informed by a long time user (Grumpy Mike) of the Arduino forums when I first started posting there that one should -always- explain the first usage of an acronym or similar construct before using it elsewhere in the rest of a comment.
It makes sense given that not all users may understand what the acronym means at first, and doing this is a great courtesy (it also removes ambiguity in cases where you may be using an acronym similar or identical to one from another domain - perhaps the domain of the forum or medium you are communicating on).
>When looking at the advancements in GPUs these days, where are the improvements coming from?
A considerable chunk of it is process improvements. Take a look at the chart at http://wccftech.com/nvidia-pascal-gpu-analysis/ that's about halfway down. nVidia's current generation GPUs are at 16nm. For comparison, Intel is at 14nm for their processors (though their 14nm process is better than most other chipmakers 14nm process) and hopes to start delivering 10nm this year, a year or so later than planned. So nVidia has a couple more process bumps to go, whenever they become affordable, and then they hit the wall like everyone else and have to focus mainly on architectural improvements for further performance gain.
In addition to that, Nvidia has been riding rising clock speeds for the past few years; the current Pascal architecture isn't much different from the previous Maxwell but clocks much higher, up to 2.1GHz overclocked. Maxwell also clocked pretty high, going to 1.5GHz overclocked.
CPUs are stuck trying to clock at 4GHz and beyond, so graphics cards still have some headroom to grow into.
With that price (I had expected it to be higher), it looks like they're positioning themselves against Vega already. We all win!
I'm locked in with a GSync monitor, and can't wait to replace my 2x 980s with this one. I'll still wait for Vega to come out, hopefully putting downward pressure on the pricing.
I read you. My take is that a single 1080 Ti should suffice for your needs; 690 to 980 was a bit of a sidegrade, while the 1080 Ti is roughly 2x the 980 with many added benefits, starting with the huge increase in RAM.
You'll probably wonder if your PC is still there, with noise levels likely to plummet. My 980 SLI setup certainly makes a racket under gaming load.
This is one of those times I loathe Intel's "new" (several years old now) naming scheme. I've a 6-core i7 Extreme... but it was one of the first or second gens, so it's hard to compare it against the current gen. I've also got 64GB RAM, and seldom get near 50% usage w/o VMs running. IO seems reasonable, and yet streaming video stutters a lot, with say Civ6 on the primary monitor and streaming live TV on a secondary. I didn't use to have a problem with Civ5 and streaming live TV (i.e. the TV stream was clean and Civ5 performed as crappily as Civ5 does), and when I look at Task Manager, the CPU is not fully tasked, so I presume it may be a GPU issue (anyone know of a taskmon-like app for GPUs?).
Since I'm on the fast insider ring, I've tried both with and without Game Mode turned on (which supposedly now works), and both cause the TV stream to stutter. I don't even recall having this issue before they announced Game Mode, but I can't be sure when I first had the problem, because I was also having physical internet issues in the same time frame (thanks, Comcast).
I do a lot of 3d rendering with Blender Cycles so I'm really happy to see the bump in cuda cores. The gtx 1080 didn't offer any significant performance boost over the 980 when it comes to rendering in blender. Right now for Cycles you get the most performance per dollar by buying multiple 1060s (blender doesn't rely on sli when rendering with multiple gpus). I've got my fingers crossed that this card will offer some tangible performance boosts.
I haven't used multiple gpus with different versions, but everything I've read suggests it should work. If one gpu is significantly slower than the other you might run into situations where the slower gpu gets stuck working on the last tile while the faster gpu sits idle, making your total render time go up. I imagine this would be pretty rare, and a little effort to modify the tile size would mitigate any issues.
As far as rendering from the command line with both the CPU and GPU, it's definitely possible, albeit a little hackish. Rendering from the command line uses the CPU by default. To render using the GPU you have to pass a Python script as an argument that changes the render device using the Blender Python API. Blender supports multiple GPUs out of the box, so there is no reason to split them up into separate jobs (even different model GPUs that don't support SLI). You'd only need one job for the CPU and one for the GPU(s).

The tricky part is making sure the CPU and GPU work on different things. For animations you'd probably want to change the render step option. Setting it to 2 would make Blender render every other frame, so the CPU would work on the even-numbered frames and the GPU would work on the odd frames. For single-frame renders you could set the Cycles seed value for both devices and then mix the two generated images together. Both the seed value and the step option can be set in the Python script, which means it's pretty easy to automate the entire process. It's definitely not trivial to get working, so at some point you need to decide if the 0.1x speed bump from adding the CPU is worth the effort. Any new Nvidia GPU is going to be worlds faster than whatever CPU you might be using.
See here for instructions on stacking cycles renders with different seed values:
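In the meantime, here's a minimal sketch of the kind of script described above (assuming roughly Blender 2.78's Python API; the exact preference paths have moved around between versions, and the file names here are just placeholders):

    # render_gpu.py - force Cycles onto the GPU(s) and set seed/frame step.
    # Passed to Blender on the command line, e.g.:
    #   blender -b scene.blend -P render_gpu.py -a
    import bpy

    prefs = bpy.context.user_preferences.addons['cycles'].preferences
    prefs.compute_device_type = 'CUDA'      # or 'OPENCL' on AMD hardware
    for device in prefs.devices:
        device.use = True                   # enable every detected GPU

    scene = bpy.context.scene
    scene.cycles.device = 'GPU'
    scene.cycles.seed = 1                   # vary per job when stacking single-frame renders
    scene.frame_step = 2                    # e.g. the GPU job renders every other frame

A second job for the CPU would use the same script with the device set to 'CPU' and an offset start frame (or a different seed for single-frame stacking).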
> Rendering from the command line uses the cpu by default.
I don't believe this is still true. When I render from the command line, it will use the GPU if my user preferences are set to GPU. (confirmed by render timings)
Oh nice, I'm really happy to hear that because passing in a python script is a pain. I don't remember seeing it in the release notes. Maybe the developers didn't consider it a big enough change to warrant writing down.
fp32 is almost never "gimped," since that's the basis of performance for most video games and 3D applications.
perhaps you meant fp64? That sees more use in industrial apps (e.g. oil and gas exploration) and so is often slower on consumer cards, and historically was artificially limited. For recent GPUs, it's not quite fair to call fp64 gimped, since the fp64 equipped cards are completely different designs - the lack of fp64 on this card is a design choice, not an artificial restriction for product segmentation.
I'm not sure what the original post is referring to but many early GPUs were not fully IEEE FP32 compliant, where many operations (particularly trig functions if I recall) would have 24 or fewer bits, either not supporting FP32 or requiring many cycles to compute. This article seems to describe some of the Radeon cards: https://en.m.wikipedia.org/wiki/Minifloat
The Radeon cards mentioned in that article are really old ones that predate modern unified shader architectures, DirectX 10, and compute support. As far as I know all non-mobile-phone GPUs released in the last decade or so support FP32 precision, though not necessarily with full IEEE compliance.
It's fp16 that they neglect on the consumer cards. On the Tesla p1xx variants they now have full double-speed fp16, which is a great feature for machine/deep learning users. Leaving this out of the consumer cards is certainly annoying.
Is 4K tech mature enough to use every day for desktop work and software development? I read, not so long ago, that Windows still has issues and plenty of apps look just ugly. I also read that there might be an issue with keeping your old HD panel alongside a new 4K panel - some people experienced problems with such a setup. Unfortunately, I've had no opportunity to test 4K so far. I also have no idea what it looks like on Linux systems.
I'm a software dev and use a 32" 4k monitor (3840x2160) at home. This is a 16:9 monitor (from BenQ). Unless you have 20/15 vision, you'll likely need to scale it. Windows 10 has built in scaling at 25% increments, and you can also do custom scaling. I scaled it to 115% and it works pretty well.
I also have a secondary monitor hooked up, but it's a 2560x1440 monitor turned vertically. It works fine, but I forget if font scaling is applied across the board or just to each monitor.
The ideal 4k desktop monitor size is likely at least 36-38", but I'm not sure if those are economical yet or not.
At work I have a Dell 34" Ultrawide (3440x1440). Not quite 4k, but no scaling is needed, and it's a great monitor. The one downside is that it's just got the vertical height of a 2560x1440 monitor, so I kind of miss the vertical height of my home monitor at times, but I can easily have three or four files side by side in my IDE.
I also have an Ubuntu system at home hooked up to the 4k monitor at home and it works, but you'll possibly need to adjust font sizing in your apps. I haven't spent a ton of time recently with this system though.
For all of these monitors you want a video card with Displayport 1.2. You do not want to use HDMI because you will likely end up at 30hz and that is a horrid experience. HDMI 2.0 supports higher refresh rates, but having both a HDMI 2.0 port and monitor is pretty rare.
Anyways, probably too much info, but it was either do this or work on an annoying bug :)
Just wanted to provide a counter point - I also have a 32" 4k display but find the default text size easy on the eyes without scaling. I do sit pretty close to the monitor though, about half an arms length away (elbow to fingertips).
I move my head around a lot more to focus on different parts of the screen. But I like that experience.
My vision isn't great - I'm near sighted with 20/200 vision. I can use the screen comfortably at the default scale with glasses, or with a small bump in text size without them.
I'm using OSX, with Atom & iTerm2 mostly. So that may have different font rendering than Windows.
I've used 4k for many years. On Linux it was easy; I didn't use any GUI programs other than Chrome, so it was just a matter of changing font sizes. (HiDPI didn't work for a long time in Chrome, but changing the page zoom worked fine. That is long-fixed, though.)
On Windows, things have gotten better. I have two monitors of different physical sizes, one is at 125% scaling and the other is at 225% scaling. Windows resize as you move them between the monitors. Some old apps look like garbage, but it's really an issue of Windows developers wanting their app to look "special" instead of using standard components. The apps most affected are mostly useless, like tools for changing the LED color on your motherboard. Everything important works fine; web browsers, Windows explorer, Photoshop and friends, etc.
On a 24" 4k monitor, fonts look just amazing. It is spectacular how nice the shapes are and how smooth everything is. On a 32" monitor, it's business as usual, really. Not enough pixels to feel different from 100dpi that we've had for decades.
I use a 4K Alienware laptop every day with two Full HD monitors attached through the official Alienware dock. I'm using Windows 10 and, due to internal company policies, I'm not using the Anniversary Update, which supposedly brought a better environment for multiple high-DPI monitor support.
I can say that it is pretty usable, but whenever moving windows between monitors you have to fully move them otherwise you will get weird results. Another caveat is that not all apps support 4k so I get really small icons in some applications such as greenshot.
I'm using a mid 2015 MacBook Pro with a 4K display for development, and it works pretty well. Expose hitches a bit sometimes, but that's the only trouble I've had.
YMMV if you're using it to display something other than a terminal or browser though, and I haven't tried using this display with Windows or Linux.
Given this new model will push down prices of existing models, what would be the cheapest currently available NVIDIA card that includes a DisplayPort output and supports a resolution of 3440x1440? This is just for work (editing documents, programming, etc.), not gaming.
Why not get a modern (Haswell or later) Intel CPU and a motherboard with DisplayPort output? You don't need a discrete GPU unless you're MLing or gaming, and you can splurge a bit and get something with DDR4 and/or NVMe. Having faster IO is the best way to make "productivity" software like spreadsheets and editors faster.
== gratuitous shilling below ==
For Christmas I got a new CPU/motherboard combo with a Samsung 960 EVO SSD, and it's ludicrously fast, decently cheap, and a noticeable improvement over my old (heh) SATA SSD.
Thanks for the suggestion. It is something I had considered: replacing the motherboard and the CPU with the new AMD Ryzen. But if I remember correctly, it is difficult to find motherboards that include video output (especially through DisplayPort). So I would have to, as you mention, go with Intel, which seems like a missed opportunity given the new AMD processors?
Yeah, most cards from the current generation should really be able to push that for work-related tasks. They could even go back a generation if they wanted something cheaper than a 1050.
With HDMI, the problem is the chip itself, as until Kaby Lake Intel hasn't supported HDMI 2.0 in their CPUs. Sufficiently new DisplayPort has been supported a while, though, but you'd need a motherboard supporting it, as you say.
I use Mint, where I have always been using the proprietary NVIDIA drivers. I remember Radeon drivers used to be pretty bad. But that was a few years ago. Has the situation changed? I am open to suggestions. Thanks.
AMD doesn't have a closed kernel component anymore. They do still have a userspace component, but most people use the free 3D acceleration instead, written with NDA-free documentation provided by AMD, with the help of AMD employees. This covers hardware up to their current generation, providing OpenGL 4.5 and Vulkan. Its performance is not as good as the Windows drivers', but good enough for most games out there.
Thanks to this, their hardware just works out of the box on any distribution that's not years behind.
As you don't even need to play games, I believe AMD is the better option here.
I just switched from AMD to NVidia because of driver issues. I run Arch Linux, and my experience is that installing NVidia drivers is miles ahead of AMD.
I run Arch and amdgpu was installed as a kernel driver out of the box. If you have an RX GPU, or any other one supported by the new drivers, you're going to have a great time.
I have experience with NVIDIA and AMD GPUs on Linux-systems and my experience with AMD was vastly superior. The difference was so significant that I rule out buying a NVIDIA card.
Yeah, I just thought EVGA's recent release of their FTW2 and SC2 with the advanced temp monitoring stuff was a signal that it was going to be a while longer. Guess I read those tea leaves wrong.
With the AMD Ryzen CPU release tomorrow, you can now pair a latest-generation 8C/16T CPU with a 1080 for $1000, a combination that a week ago was about $1700. The Ryzen boards should be cheaper too, bringing the total system price for a configuration like this down a lot. Pretty amazing.
Sort of; the Intel 8C/16T is crazy expensive because it's a server chip with 4 memory buses, ECC support, etc. $1000 CPUs don't make sense for gamers. The fastest "normal" i7 has plenty of cores and better single-thread performance than the $1,000 Intel chip. Of course AMD wants to compare to the $1k chip, not the more competitive $350 chip.
Ryzen is a desktop chip: no ECC, half the memory buses, good at cache-friendly stuff. No faster than last year's CPUs on anything memory-intensive.
Of course AMD's going to cherry pick the cache friendly benchmarks to brag about.
The most recent AAA titles often favour more cores, as the consoles are also 8C nowadays [1].
And in case you do more than gaming, I'd always take the 8C/16T over a 4C/8T for the same amount of money, even if it's 10-15% slower in single-threaded workloads.
Of course AMD compares to the competing 8C/16T chip, because for enthusiasts it's just a much better deal. There are a lot of enthusiast gamers who do streaming or video editing, for example.
Mainstream users are probably better off waiting for the 4C/8T Ryzen models and pairing one with an RX 480 or mid-range Vega.
> The most recent AAA titles often favour more cores, as the consoles are also 8C nowadays [1]. And in case you do more than gaming, I'd always take the 8C/16T over a 4C/8T for the same amount of money, even if it's 10-15% slower in single-threaded workloads.
Really? That contradicts everything I've ever heard and experienced myself.
At least until very recently, games were still massively dependent on single-threaded speed, since all the graphics work was single-threaded (because of DX<12 or OpenGL). Has this already changed thanks to the newest graphics APIs (DX12/Vulkan)? Last time I checked, adoption was still pretty low; it's really good news if the landscape is changing quickly. Or is it something else?
We do not yet know. Gigabyte specifically says ECC memory is compatible, but runs without ECC on their boards. Asrock only says that ECC ram is compatible.
It requires that the memory controller support ECC and that there are extra traces from the memory controller to the RAM.
This used to mean it was solely a motherboard feature, but starting about a decade ago, the memory controller was moved onto the CPU, so now it requires both CPU and motherboard support.
buying an Nvidia card is almost shooting yourself in the foot.
If all you want it for is gaming. If you also want to play around with machine learning or other GPGPU applications, then getting anything other than Nvidia is a bad idea, since that is what everyone uses and supports at the moment.
It's not just machine learning, 3D rendering is moving over to GPU as well and there is next to no support for anything other than CUDA in that space today.
Not everything starts and ends with gaming in the high end computer space.
Yeah, competition is nice. We can now choose to buy an 8-core Intel 6900K without a GPU, or an 8-core Ryzen R7 1700X together with an Nvidia GTX 1080 Ti for the same price.
While this used to be true (and mostly still is), the tide is shifting on this point. Games are using more and more cores. Overwatch, for example, uses 6. I believe DirectX 12 makes it easy/reasonable to use 4 cores, with some benefit to be gained with up to 6.
Not that it really helps with some games. Dota 2 is CPU bound for the moment, so if you play that a lot then maximizing single-threaded performance is probably the way to go.
Still, if I was buying a CPU today, I'd be cautious about going for less than quad core if I was interested in new AAA games.
Still, there are diminishing returns on more cores for such uses. You can use 2 cores, might use 4 cores; having 6-8 cores will probably just cause excess heat that could instead be spent running 4 cores at a higher frequency.
This is how Intel should have dealt with Ryzen; instead of dismissing the competition. Smart move from NVIDIA though I'm still going to be keeping an eyeball on Vega.
Full motion and 360-degree tracking don't change the card requirements, and you can do two cards if necessary, so the only real barrier is 4K rendering.
Apparently a 1060 can already render a game like overwatch in 4k, so I think we're already there. You just need to find someone that will take 4k screens out of some phones and put them in a headset.
"You just need to find someone that will take 4k screens out of some phones and put them in a headset."
Ha! With what cable? 4K * 2 * 90hz blows way past display port (even DP1.3). If it was just a matter of gluing two android phones together it would be on shelves already...
It may be hard in an objective sense, but it already exists. It's not the hard part of making a high-resolution VR headset because you can buy chips that do it for you.
A single displayport 1.3 cable, supported by all recent GPUs, can push 4K at over 120Hz. Screens that use such data rates already exist too.
Throughput is not the issue with VR, but latency; it's the time from input to final render, not the amount of time it takes to render a frame that is important (though obviously the latter is a lower-bound on the former).
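Back-of-the-envelope math on the bandwidth point (ignoring blanking intervals and protocol overhead, which eat a bit more), assuming 24-bit color:

    # Uncompressed video bandwidth = width * height * refresh * bits per pixel.
    def gbps(width, height, hz, bpp=24):
        return width * height * hz * bpp / 1e9

    dp13_payload = 25.92  # Gbit/s usable on a 4-lane DP 1.3 / HBR3 link after 8b/10b coding

    print(f"4K @ 120Hz:        {gbps(3840, 2160, 120):.1f} Gbit/s")    # ~23.9, fits (barely)
    print(f"2x 4K @ 90Hz (VR): {2 * gbps(3840, 2160, 90):.1f} Gbit/s") # ~35.8, does not fit
    print(f"DP 1.3 payload:    {dp13_payload} Gbit/s")

So both posts are roughly right: a single DP 1.3 link can just about carry 4K@120 to one panel, while two 4K eyes at 90Hz would need two links, some form of compression, or foveation.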
Most of what I've read claims that games and applications need to explicitly add support for dual/multiple GPU, as it's not something you get out of the box. And very few games do.
Anecdotally, this seemed to hold true for me, as I initially had dual 1080s in SLI and moved one of them to a second machine with no noticeable difference.
Nope. Fairly low utilization on most of the 8 cores. It's only halving the GPU resources if all of the GPU resources are being used. And that's kind of the point.. adding more GPU won't help for most VR because the games simply won't use it.
So SLI automatically bridges the two+ GPUs into a single logical card, with twice everything except RAM. Apps don't have to explicitly support it; they see a GPU with twice (or more) the number of texture units, cores, etc. as the underlying SLI'd card has.
If your VR doesn't scale up its performance with resources, perhaps it's artificially throttling itself to a max framerate? Can't push pixels fast enough to the headset? waiting for sensor data from the headset before rendering?
I'm not sure, really. I've searched on it a bit and don't see a lot of good sources on explicitly why it doesn't work well, but I see the vast majority of user opinion is the same.
This is a bit anecdotal, and I don't have expertise in the underlying technology, but I read mentions of requiring use of specific driver functionality to get better performance out of an SLI or dual GPU setup, via mechanisms like using a dedicated GPU to render each eye independently from each other instead of attempting to render both images together on a bridged virtual card. Allegedly, it's a bit of extra effort to do this (as they'd need to support both the AMD LiquidVR and Nvidia VRWorks) and would only benefit a minimal audience, so other aspects of the game get prioritized for development effort.
There are a couple of implementations that do. eg, Nvidia Funhouse VR was a showcase app of what implementing their SDK could do and they used it. I think Serious Sam VR used it, but I haven't tried that game..
If all you want is 60FPS, current cards work well enough, though 60FPS has too much latency for VR in my opinion.
Ever since I got a 144Hz monitor, my 980 Ti struggles to do 144Hz consistently at 2K. Even though I'm not doing VR, the extra frames are very noticeable, and it's hard to go back to 60Hz displays for things in 3D.
For 4K VR, you don't need to render the entire scene in 4K. Only a few degrees of the center of vision really needs to be rendered in 4K, and the image can be at a much lower definition the farther you get to the periphery (foveated rendering).
Future headsets can use internal cameras to track the position of the pupils to figure out where that sweet spot needs to be every frame.
I'm not a VR developer, but my impression from reading about this (or maybe listening to some John Carmack talks) was the opposite: that VR actually demanded much more from graphics cards than non-VR.
This was (going from memory here) due both to the much higher refresh rate that's required and to having to pre-render not only what the user sees but the whole 360-degree sphere they might look around at any moment, and perhaps some other VR-specific requirements that don't spring to mind at the moment.
Was I dreaming that I read this? Maybe a VR developer or someone more knowledgeable than I could comment?
Rendering for VR currently is more intensive due to the high frame rate and double rendering (once for each eye). However, no one renders a 360-degree sphere, and single-pass stereo rendering is already possible, which does the CPU-side rendering for both eyes together. [1] Still, at present, VR rendering is way more intensive than regular rendering, and it hasn't yet benefited from decades of optimization work like regular rendering has.
The parent post is probably referring to foveated rendering - a research technique which uses eye tracking to render most of the screen in low quality and the areas being looked at in higher quality. (Real vision is perceived something like this - try reading text that isn't in the center of your gaze.) You can do foveated rendering better for a close screen, so eventually it's possible that the quality of VR rendering will surpass regular rendering.
The GPU is faster than a human can move. You don't need to render all around.
Even better, if you render just a couple degrees extra you can double your framerate via interpolation. That way you can have 120Hz head-tracking while only having to render the world at 60Hz.
For proper quality in VR, MSAA (and not other anti-aliasing methods) is highly recommended, which already effectively makes this 4K. Not really, because MSAA isn't the same as actually rendering at a higher resolution, but close. Or "close", depending on your shaders and render pipeline configuration...
And by the way, about the render pipeline: MSAA means you'll be using forward rendering, not deferred, which also limits you in terms of the number of light sources, for example (but makes transparent stuff much easier). But wait - maybe this 1080 Ti will allow us to implement MSAA on deferred renderers, finally giving us VR folks all the goodies, like HDR? Well, no - unless it becomes a minimum requirement, nobody's going to support two rendering paths in their games.
Also, you mention 360 degrees - but who actually needs that? We only need to render what the player actually sees - and if we finally get stable eye-direction detection, we will be able to decrease the angle even further, rendering everything outside the focus zone at a much lower resolution. The same goes for "per eye" as well: another usual optimization tactic is to render everything that's far enough away from a single camera, saving fill rate.
Oh, and by the way - we still didn't talk about what texture resolution we want, what shader features we want to use, how many objects are in the scene and in view, how many triangles... But we would need to create a series of prototypes (gameplay and graphics at least) before answering these questions anyway.
How close are we to being able to render a high resolution monitor / TV inside VR? Not necessarily a 4K screen, but like a good 2560 x 1440 display? How close are we to high resolution virtual cinema experience?
... or are we there? I really have no idea how resolutions in VR work.
VR headsets don't have that level of resolution yet. There will still be visible pixels for years to come. I believe it was either Carmack or Gabe Newell who said you'd need nearly 50k vertical pixels for true photo realism, I may be mistaken though.
If you want to render at 2560x1440 inside VR and not lose quality, the headset's resolution has to be higher per eye than the display you're emulating. You would need a VR headset with roughly 4K per eye rather than a better GPU.
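Rough arithmetic behind that, with assumed numbers for the headset's field of view and how much of it the virtual monitor occupies:

    // Pixels-per-degree estimate behind "you'd need ~4K per eye"; inputs are assumptions.
    #include <cstdio>

    int main() {
        const float headsetFovDeg   = 100.0f;  // assumed horizontal FOV per eye
        const float panelWidthPx    = 1080.0f; // first-gen Rift/Vive-class panel width, per eye
        const float virtualScreenPx = 2560.0f; // the monitor we want to emulate
        const float virtualSpanDeg  = 60.0f;   // how much of the view it occupies

        float headsetPpd  = panelWidthPx / headsetFovDeg;
        float neededPpd   = virtualScreenPx / virtualSpanDeg;
        float neededPanel = neededPpd * headsetFovDeg;

        std::printf("headset density: %.1f px/deg, virtual screen needs: %.1f px/deg\n",
                    headsetPpd, neededPpd);
        std::printf("-> panel width needed per eye: ~%.0f px\n", neededPanel);
        return 0;
    }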
How hard is it for games to ignore SLI entirely and simply use a second or third GPU to offload things like post-processing effects?
Or even better, if you had three 1080 Tis and three monitors, could the application/game just assign one GPU per monitor without having to resort to SLI (which has such a bad reputation)? This would make things so simple: want an extra monitor or two? Just add a GPU to power them. I can't imagine that the coding for something like this would be anywhere near as complicated as SLI/Crossfire.
I think you can do that using some virtualization software. It really depends on the graphics card drivers and game engines exposing that ability to the actual game programmers.
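Explicit APIs at least expose the building blocks for this: in Vulkan each GPU appears as its own physical device, so an engine could in principle create one logical device and swapchain per monitor. A sketch of just the enumeration step; the per-monitor device/swapchain setup (and most error handling) is omitted.

    // List the GPUs Vulkan can see; an engine could then pick one per window/monitor.
    #include <vulkan/vulkan.h>
    #include <cstdio>
    #include <vector>

    int main() {
        VkApplicationInfo app{};
        app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
        app.pApplicationName = "gpu-per-monitor-sketch";  // hypothetical app name
        app.apiVersion = VK_API_VERSION_1_0;

        VkInstanceCreateInfo info{};
        info.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
        info.pApplicationInfo = &app;

        VkInstance instance;
        if (vkCreateInstance(&info, nullptr, &instance) != VK_SUCCESS) return 1;

        uint32_t count = 0;
        vkEnumeratePhysicalDevices(instance, &count, nullptr);
        std::vector<VkPhysicalDevice> gpus(count);
        vkEnumeratePhysicalDevices(instance, &count, gpus.data());

        for (uint32_t i = 0; i < count; ++i) {
            VkPhysicalDeviceProperties props;
            vkGetPhysicalDeviceProperties(gpus[i], &props);
            std::printf("GPU %u: %s\n", i, props.deviceName);  // e.g. assign GPU i to monitor i
        }

        vkDestroyInstance(instance, nullptr);
        return 0;
    }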
See the Mirror's Edge and The Division benchmarks. Averages are (just) over 60, but there are dips below.
Personally, I think the best resolution for current-gen cards is 3440x1440. Should be rock solid 60+ fps in all games, and gives the benefit of being ultrawide.
I haven't owned a gaming PC in close to 10 years, and today's performance numbers look completely crazy to me. The last time I played a game I was happy that I could get 1024x768 smoothly with high framerates. Seeing that you can play at 3440x1440 with 60 fps feels like magic.
I strongly disagree that being ultrawide is an advantage. 16:9 is already wider than optimal for gaming. The human field of view is actually about 4:3. While there is admittedly usually more interesting content to the sides than the vertical edges, I've found 16:10 to be preferable for immersion and would never dream of going wider. If anything, I'd go closer to a square if they still made them.
I'd rather have 2560x1600 than 3440x1440, even though it's fewer pixels.
IMAX film format has a more immersive aspect ratio too, which is taller still at about 16:11.
I have an ultrawide not so much to perceive all my content at once as so that I no longer need dual monitors to put two documents next to each other without compromising the width of either too much. Even so, I came from a 27" 2560x1440 monitor and the edges are still of value to me as peripheral vision in games. Add in that most 34" ultrawide screens now seem to have a curve to them, which makes visibility at the edges easier as well. Not having to set up an extra monitor and suffer the bezel in the middle is very much worth it for me, because otherwise I'd need three monitors, and at that point it gets insane with multiple monitors in portrait and such.
Try a 4K TV. I bought a card with an HDMI 2.0 output (for 4K at 60 Hz), plugged it into my 46" TV and never looked back. The only drawback is needing to use the remote to turn it on and off. The monitor shuts off with DPMS, but when the machine is off it still searches for a signal. Doesn't bother me at all.
Well, I guess it's down to personal preference, because I would always prefer ultra wide to normal 16:9 for gaming. It's just a much better experience in every way, and going back to 4:3 is just miserable. Again, ymmv.
You're correct that we can only make out fine details in a narrow field of view, but that doesn't mean there isn't value to having more in our peripheral vision. Having extra horizontal width specifically is nice, because of how there is naturally more interesting stuff going on in that range; above and below you is just skybox and ground (for many first/third person 3d games), which you don't really need to see more of.
There is a "regulatory" issue: some competitive games set or used to set a hardwired limit on the vertical FOV, and scale (or scaled) the maximum horizontal FOV to match the aspect ratio of the monitor (or multi-monitor setup).
No, we don't. The latest GPU architectures, including Vega (and Pascal, obviously), support rendering the scene once and then projecting it from two viewports, producing both eye views without having to render the entire scene twice.
Surely some of the work should be reusable? For most pixels beyond a certain depth the incident eye vector is identical for all practical purposes, so you could fudge it: use the same calculated pixel color for both eyes with a slight offset, without computing it twice. No one would notice if the reflections or specular lobe for the right eye were calculated with the incident camera vector of the left eye.
Once you have calculated the pixels for the left eye, those should be reusable for the right eye with some mapping; only the pixels visible solely to the right eye would have to be computed. I'm not sure whether it's possible, whether it would actually be a performance gain, or whether this is already how it works. But doing the full job of 2x4K pixels for two eyes seems wasteful when (a) they are almost identical for objects beyond a certain distance and (b) quality is almost irrelevant for most pixels the user isn't looking at.
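For a feel of how small the per-eye difference gets with distance, here is the rough disparity formula (shift in pixels is about focal length in pixels times IPD divided by depth); the focal length is an assumed number.

    // Approximate left/right screen-space disparity for a point at a given depth.
    #include <cstdio>

    int main() {
        const float ipd = 0.064f;      // ~64 mm between the eyes
        const float focalPx = 600.0f;  // assumed focal length of the eye camera, in pixels

        const float depths[] = {0.5f, 2.0f, 10.0f, 50.0f, 200.0f};
        for (float d : depths)
            std::printf("depth %6.1f m -> disparity %6.2f px\n", d, focalPx * ipd / d);
        return 0;
    }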
With foveal rendering and some shortcuts it should be possible for a 2x4K VR setup to render faster than a regular 4K screen, where you need to render every pixel perfectly because you don't know what's important or where the user is looking. Obviously you need working eye tracking first, too...
I agree with you. There's no reason to run every pixel shader twice in full.
It seems logical that each surface/polygon could be rendered once, for the eye that sees the most of it (a left-facing surface for the left eye, a right-facing surface for the right eye), then warped to fit the other eye's view, with the remaining blanks filled in afterwards. Of course, the real algorithm would be more complicated than this, but it seems like at least some rendering could be saved this way.
Technically the lighting won't be right, but you don't have to use it for every polygon, and real-time 3D rendering is already all about making things 'good enough' to trick the human visual system, not mathematically accurate. If technical accuracy were what we insisted on, games would be 100x100 pixels at 15 FPS because we'd insist on photon mapping.
If we add eye tracking we can probably lower that to roughly 1024x768-equivalent rendering, by using high resolution where the eye is looking and tapering off to a blurry mess further away. You can even completely leave out the pixels at the optic nerve's blind spot. The person wearing the headset won't be able to tell they aren't getting full 4K or even higher resolution, and we can run better effects, more anti-aliasing, maybe even real-time ray tracing.
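The "1024x768 equivalent" figure roughly checks out with some assumed numbers: shade a 20x20-degree foveal window at full resolution and the rest of a 100x100-degree view at a quarter of the linear resolution, and the shaded-pixel budget of a hypothetical 4K-per-eye panel drops to about a tenth.

    // Pixel-budget arithmetic for foveated rendering; all inputs are illustrative assumptions.
    #include <cstdio>

    int main() {
        const double fullPixels     = 3840.0 * 2160.0;                 // hypothetical 4K-per-eye panel
        const double fovealFrac     = (20.0 * 20.0) / (100.0 * 100.0); // 4% of the view at full detail
        const double peripheryFrac  = 1.0 - fovealFrac;
        const double peripheryScale = 1.0 / 16.0;                      // 1/4 resolution in each axis

        double shaded = fullPixels * (fovealFrac + peripheryFrac * peripheryScale);
        std::printf("shaded pixels: ~%.2f M (vs %.2f M full, 1024x768 = %.2f M)\n",
                    shaded / 1e6, fullPixels / 1e6, 1024.0 * 768.0 / 1e6);
        return 0;
    }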
If this is the Nvidia/SMI research you are referring to, it looks nice, but without details, specifically on dynamic performance, there is reason to be sceptical of how good it really is.
The field of view of current consumer HMDs is too narrow for there to be a big saving compared to the downside. As you move to larger-FOV displays the brain starts making more saccades (rapid step changes in gaze direction [1]), and the combined response time of the eye tracker and image generator is too slow to put the extra pixels in the right spot. It's much more effective to just render the whole thing at the maximum possible resolution. There has been promising research on rendering at a reduced update rate or with reduced geometry in low-interest areas of the scene [2].
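A quick latency-budget calculation illustrates the problem, assuming a peak saccade velocity of around 400 degrees per second: even modest tracker-plus-render-plus-scanout latency can put the fovea well outside a roughly 5-degree high-detail window by the time the frame is displayed.

    // Gaze error accumulated during a saccade for a few total latency budgets (assumed numbers).
    #include <cstdio>

    int main() {
        const float saccadeDegPerSec = 400.0f;               // assumed peak saccade velocity
        const float latencies[] = {0.010f, 0.020f, 0.030f};  // tracker + render + scanout, in seconds

        for (float latency : latencies)
            std::printf("latency %2.0f ms -> gaze error up to %5.1f deg\n",
                        latency * 1000.0f, saccadeDegPerSec * latency);
        return 0;
    }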
This card can do 4K pretty comfortably now, at least if 60 fps is the benchmark (VR needs more), but 8K is still pretty far away. Other than VR, there is hardly a use case for it, though, and even at 4K, VR should be a lot better than it is now.
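For a sense of scale, here are the raw pixel rates those targets ask for (plain arithmetic, no particular game workload assumed):

    // Pixels per second for a few resolution/refresh combinations.
    #include <cstdio>

    int main() {
        struct Mode { const char* name; long long w, h, hz; };
        const Mode modes[] = {
            {"1080p @ 60",          1920, 1080, 60},
            {"4K    @ 60",          3840, 2160, 60},
            {"4K    @ 90 (VR-ish)", 3840, 2160, 90},
            {"8K    @ 60",          7680, 4320, 60},
        };
        for (const Mode& m : modes)
            std::printf("%-22s %6.0f Mpix/s\n", m.name, m.w * m.h * m.hz / 1e6);
        return 0;
    }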
AMD demonstrated Vega running Doom 4 with an FPS counter a while ago, and it was running about 10% faster than a stock GTX1080 (non-ti) or on par with an overclocked GTX1080.
Doom 4 performs unusually well on AMD cards compared to other games, so on average Vega is probably similar to a stock GTX1080. Which leaves them with no answer to the 1080ti :(
I think Doom 4 performs so well because id Software usually has one of the most optimized engines (for everyone), and I believe AMD cards have more raw power than Nvidia's, but it usually goes unused (I think due to unpopular APIs). Thus Nvidia is generally known to perform better in real-world cases.
Which is why I said that, personally, I think that is where they will fall. Just from looking at leaks, from the same source as the RX Fury leaks, it seems like that is where it will land.
Just speculation here, but last I looked into it, video cards were worthless when it comes to ROI compared to the specialized mining hardware used nowadays, which is continuing to advance.
Is anyone else finding that the livestream (or, I guess, now the recording of the presentation) is choppy and skips back and forth between different segments of the presentation?