NVidia will license their GPU cores to other hardware manufacturers (nvidia.com)
112 points by bashinator on June 19, 2013 | 67 comments



This is the right strategy for Nvidia, especially in the mobile space, considering they'll also be the only ones with full OpenGL 4.3 support. If Kepler GPUs become popular, developers will be making some pretty amazing mobile games that will only be available on Kepler GPUs, because they'll be the only ones to have OpenGL 4.3 for years to come.

I only wish this were Maxwell coming out next year, not Kepler, which was supposed to be in Tegra 4, but you know Nvidia and their delays... I also hope they don't screw this up by making the GPU very inefficient. If they do that, no one will want it, and this strategy will be irrelevant. They say it's very efficient, but over the years I've become a lot more skeptical about stuff Nvidia claims, at least in the mobile space.

If they can convince Samsung to use Kepler for Exynos 6, or Maxwell for Exynos 7, that would be a huge win for them. Samsung has been a little lost lately in terms of what GPUs to use in Exynos SoCs, so this may be Nvidia's window of opportunity, if they can prove their GPU is the best.


Well, NVidia are currently behind the competition in terms of OpenGL ES support, so they actually need to hope game developers don't decide to take advantage of the newest and greatest hardware features before they're ready. That would leave them without a horse in the race altogether - even their new Tegra 4 chip which hasn't launched yet can't support OpenGL ES 3.0.

Also, whether anyone licenses their hardware in the first place depends on how expensive it is and how much power it sucks down. Remember that we're talking about a full desktop GPU architecture here; the lowest-power GPU that currently uses it consumes more power than an entire tablet. NVidia have screwed this up before, and given all the NRE costs of creating a new chip, I can't see anyone going for this until they can test the actual power consumption in actual hardware.


> even their new Tegra 4 chip which hasn't launched yet can't support OpenGL ES 3.0.

To be fair, the Tegra 4 chip supports almost all OpenGL ES 3 features (via OpenGL extensions), but it lacks some checkbox items to be able to claim GLES3 conformance. Little things like numerical precision of the rendering pipeline, which will not have a giant impact on most game content but is required for conformance.
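
This is also why mobile engines tend to feature-detect at runtime rather than key off the GLES version number. Here's a minimal sketch in C of the usual ES 2.0 extension probe (the specific extensions checked are just illustrative examples):

    #include <stdbool.h>
    #include <string.h>
    #include <GLES2/gl2.h>

    /* True if the current ES 2.0 context advertises the named extension.
       Matches whole tokens, so "..._half_float" does not also match
       "..._half_float_linear". Assumes a GL context is already current. */
    static bool has_gl_extension(const char *name)
    {
        const char *exts = (const char *)glGetString(GL_EXTENSIONS);
        size_t len = strlen(name);
        while (exts) {
            const char *hit = strstr(exts, name);
            if (!hit)
                return false;
            if ((hit == exts || hit[-1] == ' ') &&
                (hit[len] == ' ' || hit[len] == '\0'))
                return true;
            exts = hit + len;
        }
        return false;
    }

    /* Example: use half-float render targets only when the ES2 extensions
       exposing this ES3-era feature are present. */
    static bool can_use_half_float_rt(void)
    {
        return has_gl_extension("GL_OES_texture_half_float") &&
               has_gl_extension("GL_EXT_color_buffer_half_float");
    }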

And as of today, nobody else has shipped a GLES3 device either.


I thought OpenGL ES was just a restricted subset of OpenGL, and old versions of it at that. It seems kind of weird for them to be behind if this is the case so maybe I don't understand this correctly.


Not exactly - it's more complicated than that (https://en.wikipedia.org/wiki/Opengl_es). GLES was originally a simplified spec based on the desktop API, but optimized for mobile and embedded devices. The two have since leapfrogged each other, removing (the fixed-function pipeline) and adding (shaders) various features at various times.

The latest versions of ES and GL are largely, but not completely, compatible AFAIK, with the desktop having a superset of features.


According to https://en.wikipedia.org/wiki/OpenGL#OpenGL_4.3 OpenGL ES 3.0 is fully forward compatible with OpenGL 4.3.


Can you elaborate on why OpenGL 4.3 will be supported on Kepler GPUs only?


Talking about the mobile space: I believe OpenGL ES, a stripped-down version of OpenGL, is used on embedded platforms instead of the full OpenGL.


Yes, that's what everyone has been using so far, and that's what everyone will keep using for at least several more years (they've just started adopting OpenGL ES 3.0, so they won't change soon).

However, Nvidia is actually going to use the full OpenGL 4.3 in the next Tegra, next year, since it's using the full (but probably more optimized) Kepler PC architecture, which already supports OpenGL 4.3. Nvidia is going from OpenGL ES 2.0 straight to OpenGL 4.3. For comparison, not even Intel's Haswell has OpenGL 4.3 support (it's still stuck at 4.0). Tegra 5 will also support CUDA 5.0.

http://www.anandtech.com/show/6845/nvidia-updates-tegra-road...


Only because it's easier to support OpenGL ES. If a GPU that supports full OpenGL is available, that would give mobile developers greater flexibility and more features.


Not many would use full OpenGL, which is optimized for CAD developers. The ES subset actually represents modern programming practices, with only a few extensions missing for the more recent features of the pipeline (hull shaders and such).


OpenGL ES 3.0 lacks geometry shader support, which means it's still a bit behind even DirectX 10. OpenGL 4.3 gained full backwards compatibility with OpenGL ES 3.0, so developers can still target only OpenGL ES 3.0 if that's what they want. But depending on how popular Kepler GPUs get in mobile, and whether it's worth their time, they can use extra features in their games for those devices.
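
To make that concrete, here's a minimal sketch (plain C, with placeholder names) of what "target ES 3.0, opportunistically use more" could look like: the same '#version 300 es' shader source compiles on both an ES 3.0 context and a 4.3 context (ES3 compatibility is part of the 4.3 core profile), and the app only enables extra passes when it detects a 4.3-class context:

    #include <stdio.h>
    #include <GLES3/gl3.h>   /* or the desktop GL headers on a PC build */

    /* The same ES 3.00 shader source is also accepted by a GL 4.3 driver. */
    static const char *frag_src =
        "#version 300 es\n"
        "precision mediump float;\n"
        "in vec2 v_uv;\n"
        "uniform sampler2D u_tex;\n"
        "out vec4 frag_color;\n"
        "void main() { frag_color = texture(u_tex, v_uv); }\n";

    static GLuint compile_frag(void)
    {
        GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(sh, 1, &frag_src, NULL);
        glCompileShader(sh);
        return sh;
    }

    /* Baseline ES 3.0 path everywhere; extra passes only on a 4.3+ context. */
    static void choose_render_path(void)
    {
        GLint major = 0, minor = 0;
        glGetIntegerv(GL_MAJOR_VERSION, &major);
        glGetIntegerv(GL_MINOR_VERSION, &minor);
        if (major > 4 || (major == 4 && minor >= 3))
            printf("GL %d.%d: enabling the geometry-shader pass\n", major, minor);
        else
            printf("GL(ES) %d.%d: baseline ES 3.0 path\n", major, minor);
    }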

I think OpenGL 4.3 support would be a lot more helpful for making cross-platform games that work pretty much the same (at a different scale of performance, of course) on both PCs and tablets, such as Battlefield 3 (the demo seems to be a recording, so the video quality is not great):

http://venturebeat.com/2013/04/11/nvidia-shows-off-stunning-...

I also wouldn't be surprised if Epic Games ports the (mobile) Unreal Engine 4 to mobile Kepler-based chips first. EA seems to be doing that with Frostbite already:

http://www.phonearena.com/news/EAs-Battlefield-and-Need-for-...


I get it, but 4.3 seems to still include a lot of legacy stuff that has plagued OpenGL for a while now. Pushing a new version of ES that was equivalent to DX11 in the shaders supported might be better. Or perhaps they need to be at 4.3 to support CUDA?


I agree that pushing for OpenGL ES would probably be best, but that also means waiting 4 or more years before OpenGL ES 4.0 is probably going to be available, if the transition from 2.0 to 3.0 is any guide.

Plus, from Nvidia's point of view this is a no-brainer: they already had OpenGL 4.3 support in Kepler on the desktop, so they didn't have to do a lot of work to bring it to mobile, and it gives them a competitive advantage that probably won't be replicated very soon. I figure even if Imagination wanted to do it, it might take them another 2-3 years before they have it.


> developers will be making some pretty amazing mobile games that will only be available on Kepler GPUs

No developer would do that unless the development was entirely sponsored by nvidia. There are almost a billion mobile devices out there, a substantial portion running OpenGL ES 2.0. There are a small number of tegra-optimized game variants that add absolutely trivial changes in return for some nvidia sponsorship dollars, but no one is going to bother with an exclusive for a single chipset.

Reading your post someone might think that nvidia has some clear advantage in mobile graphics and this is the big chance to make it big. They absolutely have no such thing, being seriously contested if not beaten by PowerVR and even ARM designs (of course nvidia does great things on the desktop, but having strength in 150W desktop GPUs does not translate into the same in sub-Watt GPUs). nvidia has had a lot of talk but a serious lack of actual delivery.


I wonder what this will mean for free drivers.

While nVidia hasn't been entirely uncooperative (they do participate in X.org upstream development to the degree that it affects their products), the only free driver they've written for the GPU line that the to-be-licensed Kepler core belongs to is an obfuscated, 2D-only token effort, and they have stayed away from participating in the Nouveau project to write a fully-featured free driver. Contrast that with Intel and AMD, where the free driver is the primary effort or one supported with documentation, code and manpower, respectively.

On the other hand, they've contracted Avionic Design to write a free 3D-enabled driver for the GPUs in their Tegra 3 SoCs. If future Tegra is based on a Kepler derivative (as indicated by the blog), and this prior commitment forces them to release a free driver for it as well, I wonder if one day this hypothetical free driver might have a shot at replacing their closed source codebase. Or they could join forces with Nouveau and finally put some of their resources behind it, similar to what AMD has done.


Contrasting with AMD: the AMD drivers are a complete mess. I have a card that was released about a year ago (HD7800) and I can't dual-screen on it with the AMD drivers (the maximum total resolution is 1920x1920 - yes, that's not a typo). Regardless of effort, the graphics drivers AMD produce are terrible, and the free driver doesn't fill the gap well (Pitcairn chipset). I would not be pointing at AMD as a role model in this regard.


I wasn't. I agree that the closed-source drivers AMD also still produce are friggin' terrible, and far worse than nVidia's closed-source drivers. AMD has been much better about supporting efforts to write free drivers than nVidia has, however. In terms of serving as a role model, I'd say Intel takes the lead by actually developing their sole drivers out in the open _and_ releasing hardware documentation so others outside the company can help out. Even Intel has had their missteps, however (e.g. the infamous Poulsbo driver mess).


For sure, Intel is the gold standard. It's just that AMD drivers suck - proprietary or free - and nvidia's don't - proprietary or free. I've got a Pitcairn-chipset GPU (AMD, released about a year ago) and I cannot get dual-screen working with hardware acceleration. The free drivers can't enable it, and the proprietary drivers are limited to the very bizarre 1920x1920 maximum joined resolution. I've noticed plenty of others struggling with AMD at the moment, and these days it's "just choose nvidia". A complete switcheroo from several years ago, when it was "just choose ATI" (well, if you wanted something more than Intel, that is).


Offtopic, but:

> the proprietary drivers are limited to the very bizarre 1920x1920 maximum joined resolution

That's not true. I'm currently using two 1080p screens on an HD7870. You just have to set it up initially using amdcccle or maybe aticonfig, or edit xorg.conf manually, after which you can use RandR-based tools as normal. I dislike that the AMD drivers lag behind in software compatibility and that they do things in non-standard ways, but they provide all the functionality I'd expect.

Getting back to your problem: in amdcccle, set it up under Display Manager > Multi-Display, or manually in xorg.conf, where Section "Screen", SubSection "Display" needs an entry called 'Virtual', which looks something like this: Virtual 3840 1920.
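
For reference, a minimal xorg.conf fragment along those lines (the identifiers are placeholders; size the Virtual line to cover your combined monitor layout):

    Section "Screen"
        Identifier   "Screen0"
        Device       "Radeon0"
        DefaultDepth 24
        SubSection "Display"
            Depth   24
            Virtual 3840 1920
        EndSubSection
    EndSection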


Well, it is true in a sense because that's the limitation that they give you. I figured you could manually fight with the xorg.conf (which I did last time I had a non-intel card years ago), but by that stage, I'd already spent heaps of time fighting with the card, and it was a work machine and I wanted to get work done. I just wanted to dual-screen Cinnamon, and the free driver did that with software rendering... with my quad-core pegged to 70%. But I live in the console, so I don't notice it except when I open htop - it was deemed 'good enough'.

In any case, the AMD drivers, at the current time, require stupid shenanigans to go beyond the basics, and the nvidia ones don't. The domain knowledge you need is quite high, and the target shifts so rapidly that the search engines are filled with out-of-date cruft.

Thanks for the info, by the way - I do appreciate it, but there is already a replacement card coming...


Luckily there's an increasing number of use cases that no longer need more than Intel - I've been impressed by just what the HD 4000 in my new laptop can do. Sure, an nVidia GPU still kicks its butt pretty thoroughly, but a lot more works acceptably than used to, and it looks like Iris Pro will be a significant improvement on top of the current gen. Unfortunately, it also looks like the wall they're going to run into after they leap over the performance one might be legal rather than technical (e.g. Mesa for Intel implements OpenGL 3 now, but because Red Hat legal has problems with the patent coverage on some of that stuff, Fedora won't actually ship all the bits).


Not only that, all their developer documentation and tooling for tracking down graphics performance focus mainly on DirectX and Windows.


Can someone explain the inner workings of this part of the industry? I know enough about graphics cards to make smart buying choices, but I don't know anything about the business. The biggest question is: how is this different from GPUs now? I can get the same model of card from EVGA, Nvidia, BFG, etc. I go with the one with the best reviews and lowest price.


This is about the silicon rather than the finished unit. Currently, third-party graphics cards are built around Nvidia-manufactured GPU chips: Nvidia sells the chips, and manufacturers use its reference design, perhaps with modification, to produce a functional card.

What is being discussed in the link is licensing the GPU design itself, which would allow other companies to fabricate their own chips containing Nvidia GPUs. This lets third parties produce system-on-a-chip (SoC) designs that incorporate Nvidia GPUs connected to other stuff, perhaps a few ARM cores or some custom DSPs or similar.

This is the approach that ARM currently take with their designs. They don't operate any fabs, but simply sell what's known as the "IP Core" to third parties (Qualcomm, Broadcom, Texas Instruments etc...)


Thanks, that makes a lot of sense.


This is hugely significant. The No.1 GPU vendor in the PC market has seen the writing on the wall, and is not standing still.

They clearly see the PC market in decline, and mobile/embedded is the way of the future, with Android as the dominant platform. Compare this to only a few years ago, when the Wintel duopoly reigned supreme.

For years, people have been talking about Linux taking on Windows on the desktop, but it hasn't really materialised. However, Linux has done something far more significant - leapfrogged the humble PC and become the platform upon which both so many Internet services and mobile devices are built. Exciting times...


I wouldn't take it to mean that PCs are in decline. I would just take it to mean that nVidia recognizes mobile isn't going away, and that there is a lot of money to be made there. That doesn't necessarily mandate the death of the PC. I don't think anyone expects smartphones to disappear, and I'm still puzzled as to why so many are so anxious to see their counterpart machines that can facilitate serious work (or play, for that matter) go away.


I think the idea is that there are orders of magnitude more content consumers, than content creators. To a greater and greater extent, mobile devices are fully capable of displaying all the content you might want (tellingly, the exception being high-end 3D games). Once traditional PCs are only useful for content creators, the economy of scale for manufacturing those systems may no longer apply.


The way I see it, this will directly cannibalize Tegra sales. Up to and including Tegra 4, NVIDIA's mobile GPUs weren't based on their desktop GPU technology. Starting with the next-gen Tegras, they will have a Kepler-based (and later a Maxwell-based) GPU in their mobile SoCs, architected from scratch to be power-efficient. That will be a big deal. Imagine running CUDA apps in the palm of your hand.

But with this step, it seems to me that Tegra will not have any differentiator any more (unless NVIDIA keeps some features to itself). Could NVIDIA be adopting ARM's strategy?


I think this could be big, especially in the mobile space. I can imagine that smaller ARM manufacturers/designers will use this over PowerVR or their own graphics implementation. In the desktop space I don't see anyone willing to do that, except maybe IBM for a supercomputer. But while this is exciting, I think it also shows that they don't see a very bright future for themselves. I guess that with their ARM processors not exactly selling hard, and the increasingly better graphics chips from Intel, the need for Nvidia as a hardware manufacturer is declining fast.


Maybe Intel could license them and finally provide proper GPUs.


My first thought too. They have sunk an awful lot of money into GPUs without ever delivering a really competitive offering. And then there was Larrabee...


I attended a Larrabee session at Games Developer Conference Europe 2009, only to see the project get dumped a year later.

Now they have Xeon Phi, but I wonder if it will ever matter for games.


Xeon Phi is not for games. It could theoretically be used for accelerated software rendering but it does not have any video hardware (such as a DVI output connector).


I think this comparison says everything that needs to be said about Xeon Phi (for now):

http://clbenchmark.com/compare.jsp?config_0=15887974&config_...

Better luck next generation...


Larrabee was also not directly meant for games. Intel was pushing for vector processing on the CPU as a replacement for GPGPUs.

You don't need video hardware directly.

Using Larrabee, or now Xeon Phi, would not be much different from doing 3D math with vector instructions or co-processors, like in the old days, before accelerated graphics cards became the norm in the PC world.

The main issue is that so far Intel has not been able to produce anything that beats AMD's or NVidia's offerings in the GPGPU market.


While they're at it open-source the nVidia Linux drivers. And end war and poverty.


Nvidia's Linux driver is more or less the same as their Windows (and MacOS, etc...) driver. They can't just open source it. Intel has an open source Linux driver and a closed source Windows driver.


Open source does not matter if they aren't able to deliver the performance professional developers require.


So now, projects that need parallel crypto-cracking can use GPGPUs instead of FPGAs. This works with any other strategic GPGPU application as well. The military no longer needs to buy a bunch of Sony gear just for the CPU. Defense contractors or grant-funded university research departments can design and use a GPGPU solution and have more control and customization than using off the shelf parts.


Feels like the next logical step. They're already customizing chips for PC manufacturers (I remember not being able to install Linux on an HP laptop because the customized chip wasn't compatible with the drivers). So the next step is just delegating the chip manufacturing.


It's not customizing, it's just configuration. Have a look at http://forums.laptopvideo2go.com/topic/9243-forceware-update...


This is a pretty big deal. Excited to see what sorta devices come of it.


Is Nvidia going to release EGL drivers for the desktop to enable Wayland? They wrote a lot about Android there, but didn't say anything about this.


Magic 8 Ball says "try running SurfaceFlinger or Mir on all ARM desktop machines so you can reuse Android drivers".


I'm not really interested in either Mir or SurfaceFlinger. Wayland on native glibc EGL drivers, that's something already. Besides, Mir is relying on the same drivers that Wayland does, so they are facing the same issue.


So, what does this mean exactly? They will license the design of the GPU cores so others will be able to build the chip themselves?


I expect they will do very much what Imagination Tech does now with their PowerVR graphics cores: license them to SoC designers, who integrate them with a CPU (invariably ARM) and other peripheral cores (e.g. DSP, Ethernet, I/O) and then get someone like TSMC or Samsung to fab the silicon (since most chip designers are fabless).


>(invariably ARM)

Aside: Imagination Tech recently acquired MIPS.


There's also a PowerVR core in some of the Intel Atom chips.


If I had to guess, I'd say they provide an IP core to the customer. Depending on the customer's size and budget, they offer either a premade IP core or a semi-custom one.


No, it's about letting others integrate NVIDIA GPUs into their CPUs/SoCs/etc.


How power efficient are they? With mobile devices moving almost as many pixels as desktop PCs nowadays it will be a power drain.


Not sure, but the latest Tegra offering has some surprisingly impressive performance/watt figures. (I attended an NVidia roadshow a while back, but don't have the brochures here.)


This could increase the competition in the right way, i.e. without further fragmenting APIs / ABIs.


So... They want to get out of the business of actually making things and just sell IP cores now?


They've never really been in the business of "making things" as end-users would see it. They've designed chips and reference PCBs, but they contract outside foundries for manufacturing (unlike, say, Intel, who have their own fabs) and the silicon goes to retail partners like ASUS and Gainward and many others to actually put into the channel (with minor downstream customization like custom heatsink designs). They've only recently actually sold some things directly to customers, like the SHIELD gaming handheld.


To be fair, NVIDIA tried to sell a couple of products directly to end consumers before. For example they sold a dual tv tuner card a few years ago (http://www.nvidia.com/page/dualtvmce.html), but it didn't go too well and was discontinued. These were always half-hearted attempts at testing the waters. NVIDIA never really put too much effort into it.


It will be interesting to see if anyone will use their GPUs for a product that directly competes with the SHIELD.


I'd say SHIELD is pretty much a PR effort to incite others to do exactly that - it creates buzz around the virtues of nVidia's mobile tech for gaming, and so could make licensing nVidia GPU IP an attractive selling point. I don't think nVidia expects SHIELD to be a high-volume seller on its own, it's pretty niche form factor-wise.

This is also why you have a bunch of graphically intense games on the Play store which are Tegra-exclusives, from developers who cut a deal with nVidia. For example Arma Tactics.


This is just in addition to their current business of selling chips and cards as far as I can tell.

I don't think anyone will want to produce exactly an nVidia chip, most likely the idea is that other vendors would want to integrate an nVidia GPU core on their mobile SoC (Kepler as an architecture is supposed to scale from mobile to cluster computing sizes) - perhaps Qualcomm or Apple, or hell, maybe even AMD now that they're starting to put out ARM chips. Heck, it'd be pretty interesting if Intel put integrated Geforce cores on their next gen CPUs, at least as an option. Although most likely neither AMD nor Intel would bother since they've got their own fairly strong mobile GPU architectures now and figuring out how to integrate an nVidia core would probably be more work for dubious gain.


> So... They want to get out of the business of actually making things and just sell IP cores now?

NVIDIA GPUs are fabricated by TSMC (Taiwan Semiconductor Manufacturing Company). NVIDIA never owned its own fab. The only difference is that now instead of keeping their IP cores secret they are willing to license them for a (probably huge) fee. Whoever licenses the core would probably also use TSMC for fabrication.


Does this mean that NVidia is moving towards being an ARM instead of an Apple?


Do you think ZF is like BMW?


Is Rolls-Royce like Boeing?

This is a fun game.


NVidia hasn't sold direct to customers in years, so comparing them to Apple is a bit odd.

The distinction now is that, in addition to buying fabricated chips, you can license the GPU designs directly, so you can integrate them more tightly into your own silicon.



