
> This is an integrated GPU

You say that like it's supposed to imply it's worse for some reason.

> battery-powered device with limited thermal regulation.

The Xbox One and PS4 were both released about three years ago. Almost three and a half by the time the Switch is released. That's a long time for Moore's law to operate, and it's easier for GPUs to take advantage of that phenomenon, since they are extremely parallel.
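As a rough back-of-the-envelope (treating the classic ~2-year doubling cadence as an idealization rather than a guarantee, so the number is only illustrative):

    # Rough sketch: transistor budget growth under an idealized
    # ~2-year doubling cadence.
    years = 3.5                 # Xbox One/PS4 launch to Switch launch
    doublings = years / 2.0
    growth = 2 ** doublings
    print(f"~{growth:.1f}x the transistor budget")   # ~3.4x

Call it roughly 3.4x the transistor budget to spend, before counting any architectural gains.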

At 14-15 mm, the portable portion is actually a bit thicker (if smaller) than a new MacBook. Without the need for a keyboard or a clamshell design, that's probably quite a bit of usable space. I'm not sure why we should be surprised by claims of performance matching three-year-old systems.




> You say that like it's supposed to imply it's worse for some reason.

It does because of thermals.


Don't integrated units produce less heat? Do you mind expanding on this so I know what you are referring to?


They produce less heat because they are an inferior version of a standalone GPU. It's harder to extract heat from two chips in the same space (CPU and GPU) than from two separate chips, and you're also limited in chip space for transistor count. (I'm assuming "inferior" refers to compute performance and "integrated" means integrated onto the same chip, btw; if that's not what you meant then ignore my comment.)


> They produce less heat because they are an inferior version of a standalone GPU.

Isn't that entirely based on what they integrate? The market has traditionally been such that integrated units are built for cost, and so are paired with cheaper GPU components. I don't know of any technical reason that integrated units use low-end GPU specs, just business reasons. The normal market pressures would be different in this case, given the size of the order and the specific needs.

> I'm assuming inferior refers to compute performance

Inferior based on maximum possible compute performance, sure. The top-of-the-line dedicated GPU will have more leeway to work with than the top-of-the-line integrated one, but we're hardly talking about top of the line; we're talking about matching three-year-old hardware.


Moore's law has been dead for about a decade bro.

3 years is a long time, but IMO not enough to bring a 100+ watt SoC down to the ~20-30 watts required for that form factor.

That being said, Nvidia is still much more energy-efficient graphics-wise than AMD, so I could see it getting fairly close - maybe 70-80%. CPU-wise, though, I doubt they'll be as close, just because of power.


The only reason to think Moore's law is dead now, much less a decade ago, is if you don't know what Moore's law actually is.[1] I don't blame you for that, though; it has been incorrectly explained for decades.

Moore's law refers to the number of transistors, not speed. We've consistently added more transistors to CPUs, but through separate cores, since there are problems with doing that usefully within a single core. Now, that's been of mixed benefit to some programs, because not all problems, programs, or programmers can take advantage of multiple cores, or at least not effectively. GPUs, though, are doing work that is "embarrassingly parallel", and thus can effectively utilize this increase in transistors and cores. For example, the GeForce Titan X has 3072 CUDA cores.[2]
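As a loose illustration of what "embarrassingly parallel" means (a plain-Python sketch with made-up names, standing in for a GPU kernel): each output element is computed from its own inputs only, so the work splits trivially across however many cores you have.

    from multiprocessing import Pool

    def shade(pixel):
        # Each element depends only on its own inputs; no element needs
        # results from any other element.
        r, g, b = pixel
        return (min(r * 1.1, 1.0), min(g * 1.1, 1.0), min(b * 1.1, 1.0))

    if __name__ == "__main__":
        pixels = [(0.2, 0.4, 0.6)] * 100_000
        with Pool() as pool:                      # a handful of CPU workers
            shaded = pool.map(shade, pixels)      # trivially split across them
        print(len(shaded))

A GPU does the same thing with thousands of hardware threads instead of a handful of processes, which is why extra transistors translate almost directly into extra throughput for this kind of work.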

The GeForce GTX 760 (maybe a mid-range desktop part?) was released June 25, 2013, with a retail price of $249.[3] The GeForce GTX 980M (the second-newest generation of mobile parts) was released October 7, 2014 (two years ago!).[3] It trounces the desktop part from one year prior.[4] We've had two years since then. I don't think there's any problem hitting the performance of the last generation of consoles, at least as far as the GPU is concerned.

Edit: Switched from 980 to 980M to clearly indicate a mobile part, which only makes the release a year earlier for the mobile part, but doesn't really change the outcome.

1: https://en.wikipedia.org/wiki/Moore%27s_law

2: http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-tit...

3: https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_proces...

4: http://gpuboss.com/gpus/GeForce-GTX-980M-vs-GeForce-GTX-760


The perf gains you're seeing are from architectural and software advancements, not pure silicon.

You cite Nvidia as an example against it, but Nvidia itself is clearly not of the same opinion:

http://www.techradar.com/news/computing/nvidia-vp-claims-tha...

http://www.techdesignforums.com/blog/2015/12/11/iedm-arm-yer...

http://www.eetimes.com/author.asp?section_id=36&doc_id=13298...

http://www.extremetech.com/computing/123529-nvidia-deeply-un...


> http://www.techradar.com/news/computing/nvidia-vp-claims-tha....

Poor reporting:

> Moore's law describes that trend for computing performance – processing speed, memory capacity and the like - to double approximately every two years, named after Intel co-founder Gordon E. Moore, who first outlined his theory back in 1965.

That's not what Moore's law is, and is an example of what I meant when I said it was incorrectly explained for decades. The very first paragraph of the Wikipedia article I originally cited shows this.

The other articles seem to be about density scaling, which is only part of what goes into "how many transistors you can fit into a dense integrated circuit." Cores, obviously, allow us to mitigate this to some degree.

For example, in this[1] article, you'll find a graph that shows transistor count over time. It's fairly easy to see that as of a couple years ago, Moore's law was still going strong.

1: https://www.extremetech.com/computing/190946-stop-obsessing-...
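As a rough sanity check against published figures (rounded numbers, so treat them as approximate): the GK104 die behind the GTX 680/760 (2012) is listed at about 3.5 billion transistors, and the GM200 die in the Titan X (2015) at about 8 billion.

    import math

    # Approximate published transistor counts (rounded).
    gk104 = 3.5e9   # GTX 680/760-class die, 2012
    gm200 = 8.0e9   # Titan X die, 2015
    years = 3
    doubling_time = years / math.log2(gm200 / gk104)
    print(f"observed doubling time: ~{doubling_time:.1f} years")   # ~2.5 years

That's in the same ballpark as the historical ~2-year cadence, which is the point the graph makes.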



