A bit meta, but for those who were not previously aware of Real World Tech: if you enjoyed this article, do dig into the archives. David Kanter does awesome write-ups of (mostly) CPU and GPU architectures. I consider them a step beyond Anand Shimpi's write-ups, the most recent of which, on Apple's Cyclone microarchitecture [0], was recently shared on HN. They're published very infrequently; the front page of RWT still shows two articles from 2012.
Although it's not exactly relevant to the type of programming I do, I still find it useful to understand how modern processors actually work. At the very least, reading architecture articles helps me to understand exactly why certain processors are faster at specific tasks than others.
While I think it's great that David is being paid to write what he is good at, it's slightly disappointing to me that his long-form write-ups are going behind a paywall at Microprocessor Report, as covered in his recent post "Splitting my time between MPR and RWT" [1]. The submitted article is likely one of the few more in-depth articles he talked about being released over the next several months.
Jon and I are good friends, and we have very different styles. I don't write for a mainstream audience - I write for computer architects, programmers, and IT folks. My greatest strength (IMO) is that I can write clearly and in great detail about complex topics such as CPUs, GPUs, etc.
Jon's strength is taking a very complex subject and making it much more approachable to a larger audience. Sometimes that involves simplifying things, but that's quite reasonable.
The conclusion alluded to one of the most interesting questions about how Jaguar may age. Summarizing crudely: we've never had an SoC shared this closely between consoles (XB1/PS4), desktops, laptops, and tablets. As the SoC shrinks and the software on it gets optimized further and further, you could imagine a future in short order (3-4 years?) where you can buy a much-shrunk Jaguar-based tablet that plays PS4 games at full fidelity.
His point is that this has the potential to greatly extend the life of the Jaguar SoC purely by virtue of die shrinks and power savings, with no significant rejiggering of performance between generations.
I thought that was a good call-out: we've never had this apples-to-almost-apples comparison before to see this story play out.
The CPU side can definitely be matched, but packing in the GPU power would take a lifecycle at least as long as the last console generation's.
Personally, I'm dismayed by the level of CPU performance in the new consoles. The last generation quickly ran into a roadblock posed by limited memory capacity, but I think this generation's developers have been hamstrung by a lack of CPU horsepower right out of the gate. This is coming from a total layman who can only interpret benchmarks, but I know that Jaguar's IPC is very weak compared to every other modern x86 design, and the power of a single thread is a huge limitation no matter how much you try to program around it.
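To put a rough number on that (my own back-of-the-envelope, not from the article): Amdahl's law caps how far extra cores can paper over a weak single thread. If a fraction p of the frame parallelizes across n cores, the speedup is

    S(n) = \frac{1}{(1 - p) + p/n}

Even with p = 0.9 spread across all eight Jaguar cores, that's 1/(0.1 + 0.9/8) ≈ 4.7x, and the remaining serial 10% still runs at whatever one weak core can manage.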
Ever since the PS3, games have been very good at taking advantage of multiple cores due to extensive use of data-oriented design. Consequently, in my experience, single-threaded performance isn't as important as you might think.
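For anyone who hasn't run into the term: data-oriented design mostly means laying entity state out as parallel arrays (structure-of-arrays) rather than as an array of objects, so hot loops stream through contiguous memory and split cleanly across cores. A minimal C++ sketch of the idea (the names and fields are illustrative, not from any real engine):

    #include <cstddef>
    #include <vector>

    // Array-of-structs: a pass that only needs positions still drags
    // health, AI state, etc. through the cache with every entity.
    struct EntityAoS {
        float px, py, pz;   // position
        float vx, vy, vz;   // velocity
        int   health;
        int   ai_state;
    };

    // Structure-of-arrays: each field is its own contiguous array, so a
    // pass touches only the bytes it actually needs.
    struct EntitiesSoA {
        std::vector<float> px, py, pz;
        std::vector<float> vx, vy, vz;
        std::vector<int>   health;
        std::vector<int>   ai_state;
    };

    // Integrate positions over [begin, end). Each worker thread can own
    // a disjoint index range with no shared writes, so this kind of
    // pass scales across many slow cores almost for free.
    void integrate(EntitiesSoA& e, std::size_t begin, std::size_t end, float dt) {
        for (std::size_t i = begin; i < end; ++i) {
            e.px[i] += e.vx[i] * dt;
            e.py[i] += e.vy[i] * dt;
            e.pz[i] += e.vz[i] * dt;
        }
    }

The point isn't the code itself; it's that once your data is organized this way, carving a frame into independent jobs is straightforward, which is exactly what eight slow Jaguar cores reward.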
PS3 games made everyone learn to take advantage of multiple cores, but the incredible single-thread (more like a thread and a half, with careful manual scheduling) performance of the SPUs should not be discounted.
I think the CPU story is pretty reasonable this console generation. You trade single-thread performance for a design that's more powerful overall and more space-efficient. It's just a lot easier to optimize for the CPU than for the GPU, especially on a gaming-focused platform, so console makers always trade CPU for extra graphics power.
If running PS4 (or PS5/6/7) games on tablets becomes a reality, you'd most likely still use a handheld controller, at least until there's some bigger transition in gaming (VR controllers, etc.).
>>Fundamentally, AMD is the only company outside of Intel that is capable of designing, validating, and shipping x86 microprocessors.
Everyone always forgets VIA and their CPU division, Centaur Technology. The VIA Nanos are modern amd64 CPUs that perform well for their power envelope. http://en.wikipedia.org/wiki/VIA_Nano
VIA has basically been pre-empted by Intel's Atom line, which is more power-efficient and higher-performance. It's not that they have been forgotten, but they are still using ancient process technology, meaning uncompetitive products.
I had a Crusoe-based ultraportable during college (2002ish-2005). Performance wasn't fantastic, but it was a third the size of most people's laptops or less, and I could get 8-10 hours out of it. I didn't even take my power cord to school with me.
Transmeta also later developed the Efficeon, which had a number of improvements over Crusoe (HyperTransport instead of a shared PCI bus for the chipset interconnect, larger caches and a longer VLIW instruction word, SSE extensions). HP used it in some of their thin clients. A few years ago I compared it to the VIA C7: http://fijam.eu.org/blog/benchmarking-transmeta-efficeon/
IBM and TI could also bring a microprocessor to market, and both have fairly extensive experience with x86. To be fair, their last stand-alone x86 parts were i486 contemporaries, but it's not as though they don't understand the innovations that led from the i486 to the i7 (see: POWER7). If there were market demand, either could probably produce an x86-compatible, depending on licensing issues. If you go back to the 8086 and i286, NEC and Fujitsu also made x86 chips and still have formidable chip-design capabilities.
As much as I wish for the success of the only "real" competition Intel seems to have, I'm still fairly annoyed that they (AMD) seem to have completely abandoned their high-end CPU line (Vishera being the most recent specimen) and any plans for a successor.
I frequently wonder, alt.history-style, what modern Atari and Commodore computers would look like.
Actually, that extends to other "classic" machines like the Xerox Star, Lisp machines, or the Lilith. It would be very interesting to see what products those ideas would yield when combined with modern fab technology.
[0] http://anandtech.com/show/7910/apples-cyclone-microarchitect...
[1] http://www.realworldtech.com/mpr/