After reading about the great performance of newer ARM-based offerings, I was surprised when I compared real-world performance at the same clock speed recently: ARM doesn't even come close to any recent-generation x86. This is certainly one very important measure of an architecture.
A quick SunSpider test with a US Samsung Galaxy S3 (1.5 GHz Snapdragon) on Jelly Bean's likely highly optimized browser shows performance very comparable to a first-generation Intel 1.66 GHz Atom 230 single core on the latest Firefox. Granted, it's a mostly single-threaded test anyway, but the ARM has both cores available and the test is pretty CPU-bound after it starts.
I'd estimate the latest i7 is at least 3x faster per GHz on this lightweight but fairly general (CPU-wise) test.
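To show what I mean by "per GHz": the score gets divided by the clock so machines at different frequencies can be compared. A minimal sketch of that normalization in C (illustrative only, not SunSpider; CLOCK_GHZ is whatever the part under test runs at, e.g. 1.5 for the S3's Snapdragon or 1.66 for the Atom 230):

    /* Minimal sketch of per-GHz normalization, not SunSpider itself.
     * Times a CPU-bound integer loop and reports work per cycle so
     * results from machines at different clock speeds can be compared. */
    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    #define CLOCK_GHZ 1.5   /* nominal clock of the machine under test */

    int main(void)
    {
        const uint64_t iters = 200000000ULL;
        volatile uint64_t acc = 0;          /* volatile so the loop isn't optimized away */
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (uint64_t i = 0; i < iters; i++)
            acc += i ^ (acc >> 3);          /* cheap dependent integer work */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.3f iterations per cycle (per-GHz score)\n",
               (iters / secs) / (CLOCK_GHZ * 1e9));
        return 0;
    }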
For heavy lifting, a recent i7 with its cache size, memory bandwidth and accompanying I/O would probably compare to an ARM running at about 5x the clock speed.
I don't think that ARM can be suddenly declared the best at anything other than maybe performance-per-TDP.
Performance-per-cycle is the more difficult problem to solve... ask AMD how hard that's been since the original Intel Core series appeared on the scene in 2006. Before that, once AMD was no longer just a chip clone maker, it dominated this metric.
But performance per TDP is what matters. Battery technology is getting better much slower than CPU technology. We're already at the point where CPU speed is "good enough," but we're not at the point where battery life is "good enough."
I've downsized from a Core 2 MBP to an iPad. It has 1/4 the RAM and runs at half the clock speed. Do I care? No! Web browsing is fast and fluid, photos load plenty fast, editing documents in Pages is plenty fast. And it lasts through my whole 12+ hour workday, letting me leave the charger at home and often not even bothering to charge it every night. That's huge and much more important to me, and I'd imagine to most people, than whether it could be imperceptibly faster.
I agree for typical single-user applications, but it's hard to call that the end of x86 as in the article. If ARM were matching x86 cycle-by-cycle in performance for less power, that might be a credible claim that servers were next. Also, high-end x86 devices run higher clock speeds with more cores, so the gap in absolute performance is at least 1000% wide. I just don't see ARM displacing Intel where batteries, or at least small form factors, aren't involved. I don't expect to have an x86-based phone or tablet either.
My opinion (and, I suspect, the author's) is that the market for chips where batteries or form factor are not a concern will no longer be large enough to support a company of Intel's size. RIM makes the finest high-security physical-keyboard phones anywhere. That is now a market sized for a company 1/10th their current size.
Exactly - the latest A15 benchmarks show the ARM chips being competitive speed-wise (in terms of computational power) with the first Core 2 Duo from 2006. Intel is 6 years ahead.
Yeah, the ARM cores generally have the power advantage at the moment, but looking at what Haswell's ULV parts seem to be promising compared to the A15, there won't be much difference under load, and you can bet that the Intel chips will leak much less at idle and will probably have better sleep states.
Oh the horror. What a bunch of random, clueless and non-technical crap. I especially like the part where he compares the operating costs of Intel (a company owning a number of multi-billion-dollar fabs that are far above the competition in capabilities) and ARM holdings (a comparatively tiny intellectual property shop).
Unfortunately, it is exactly on the advice of such "strategically thinking" MBAs that our industry is often run :-(
Yes, it's of course a grapes-to-watermelons comparison. But it's still worth considering.
This economic concept of 'diminishing returns' (small competitors can operate more efficiently than large ones) is to a big degree what enables and inspires the HN tech startup scene.
I like how he quotes the 7.6 billion primitive chips (embedded processors just a notch above a bunch of random logic gates) "shipped" by ARM to the high-margin (for Intel) PC market.
Even ARM's "embedded processors just a notch above a bunch of random logic gates" are fully-fledged 32-bit microprocessors with hardware support for preemptive multitasking, running at fairly respectable clock speeds. And that's the sub-dollar embedded stuff!
It's a lot of white-label fabs, so manufacturers can take the blueprints, tweak them, and then do their own runs. I think the author's comparison is spot on - the ARM model is actually a killer advantage. It will actually shrink the value of the market while pushing the volumes through the roof. That sounds like a crazy thing to do - unless you're the one in control of the show.
>What a bunch of random, clueless and non-technical crap. I especially like the part where he compares the operating costs of Intel (a company owning a number of multi-billion-dollar fabs that are far above the competition in capabilities) and ARM holdings (a comparatively tiny intellectual property shop).
Yes, he sounds like someone in the late nineties comparing the nearly bankrupt Apple with competitors worth ten times more, like Sun and Dell.
Whenever I read about modern desktops, I think about the graphs in the Innovator's Dilemma. [1]
The incumbent players (Intel, Microsoft, Dell, HP) are all competing on the established metrics of performance & price, but those are no longer the metrics that matter. ARM is pushing the power efficiency angle. Apple is winning on industrial design.
The entire computer industry (excluding phones, which has obviously already been disrupted) is right on the verge of being flipped on its head. There were hints of that with the netbook wave, but they weren't quite good enough. The iPad and subsequent high end Android tablets are close, but not 100% there. But we are just about at the point where ARM vs x86 is equivalent for the mass market, and that really is going to shake things up.
Missing from this post is any sort of discussion about how modern x86 CPUs are poorly designed for the types of programs most people are developing these days.
Managed language runtimes represent the bulk of programs people are running on servers (think: Java/Scala/Clojure, PHP, Python, Ruby). These environments not only lack "mechanical sympathy" but also have requirements above and beyond what x86 can do.
To take Cliff Click's word for it, managed language runtimes consume 1/3 of their memory bandwidth on average zeroing out memory before handing objects to people. If x86 supported an instruction for doing just-in-time zeroing into L1 cache, this penalty could be eliminated, and that 1/3rd of memory bandwidth could be used for actual memory accesses instead of just zeroing out newly allocated objects. In an age where RAM is the new disk, this would be huge.
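To make the zeroing cost concrete, here's a minimal sketch of the kind of bump-pointer allocation path a managed runtime uses (illustrative C, not any particular VM's allocator): the memset on every allocation is the traffic being described, and a hypothetical zero-into-L1 instruction would let the runtime drop it.

    /* Illustrative bump-pointer allocator of the sort managed runtimes use.
     * Every object handed out must look freshly zeroed, so the allocator
     * pays for a memset (and the memory traffic behind it) on each allocation. */
    #include <stddef.h>
    #include <string.h>

    typedef struct {
        char *top;      /* bump pointer into the nursery / allocation buffer */
        char *limit;    /* end of the buffer */
    } bump_heap;

    void *gc_alloc(bump_heap *h, size_t size)
    {
        size = (size + 15) & ~(size_t)15;   /* align; typical for object headers */
        if (h->top + size > h->limit)
            return NULL;                    /* a real runtime would trigger a GC here */

        void *obj = h->top;
        h->top += size;

        /* This is the line eating ~1/3 of memory bandwidth: the language
         * guarantees zeroed fields, so lines get dragged through the cache
         * just to be overwritten again by the application. */
        memset(obj, 0, size);
        return obj;
    }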
Unfortunately the amount of time it takes to get a feature like this into an Intel CPU is a bit mind boggling. Azul started talking to Intel about hardware transactional memory early last decade, and Intel is finally shipping hardware transactional memory in the Haswell architecture in the form of transactional synchronization extensions.
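For reference, the restricted transactional memory (RTM) half of TSX looks roughly like this from C. This is a sketch using the documented _xbegin/_xend/_xabort intrinsics (compile with -mrtm), and any real use needs a lock-based fallback like the one shown, since any transaction may abort.

    /* Sketch of Haswell's RTM interface: try a transaction, fall back to a lock. */
    #include <immintrin.h>
    #include <pthread.h>

    static pthread_mutex_t fallback_lock = PTHREAD_MUTEX_INITIALIZER;
    static volatile int fallback_taken = 0;
    static long shared_counter = 0;

    void increment(void)
    {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            /* Abort if someone holds the fallback lock, so transactional and
             * lock-based executions can't interleave incorrectly. */
            if (fallback_taken)
                _xabort(0xff);
            shared_counter++;            /* conflicts tracked by the cache hardware */
            _xend();
            return;
        }
        /* Aborted (conflict, capacity, interrupt, ...): take the real lock. */
        pthread_mutex_lock(&fallback_lock);
        fallback_taken = 1;
        shared_counter++;
        fallback_taken = 0;
        pthread_mutex_unlock(&fallback_lock);
    }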
> Unfortunately the amount of time it takes to get a feature like this into an Intel CPU is a bit mind boggling.
I think this is an unfair feature comparison. Zeroing an L1 cache line is a way simpler operation than TM, which has been designed to support two modes of operation [legacy -- which only speeds up traditional LOCK-based synchronization -- and true TM], must support transaction aborts and restarts, etc. Also, 10 years ago, TM was still a very active research area -- i.e., people had no clue about which ideas were performant and scalable and, not least, feasible to implement in HW.
PPC had that (zero one cacheline), and this caused a good amount of complaining when the G5 came out and increased the cacheline size, making dcba/dcbz worthless and dcbzl have different behaviour between G4 and G5.
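For anyone who missed that era: dcbz zeroes one whole cache block, so fast-path code looked roughly like the sketch below (hedged, PowerPC-only, asm from memory), and anything that hard-coded the G4's 32-byte block size broke or fell off the fast path when the block size changed. Querying the size at runtime was the defensive version.

    /* PowerPC-only sketch of a dcbz-style "zero a cache block" fast path.
     * The block size must not be hard-coded (32 vs 128 bytes, G4 vs G5). */
    #include <string.h>
    #include <stdint.h>
    #include <unistd.h>

    static void zero_buffer(void *buf, size_t len)
    {
    #if defined(__powerpc__) || defined(__ppc__)
        long block = sysconf(_SC_LEVEL1_DCACHE_LINESIZE);  /* glibc extension */
        if (block > 0 && len >= 2 * (size_t)block) {
            char *p       = (char *)buf;
            char *aligned = (char *)(((uintptr_t)p + block - 1) & ~(uintptr_t)(block - 1));
            char *end     = (char *)((uintptr_t)(p + len) & ~(uintptr_t)(block - 1));

            memset(p, 0, (size_t)(aligned - p));                 /* unaligned head */
            for (p = aligned; p < end; p += block)               /* whole blocks */
                __asm__ volatile ("dcbz 0, %0" : : "r"(p) : "memory");
            memset(end, 0, (size_t)((char *)buf + len - end));   /* tail */
            return;
        }
    #endif
        memset(buf, 0, len);   /* portable fallback */
    }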
Working in microprocessors, I hear this a lot, but, in the long run, Intel has a fundamental advantage over ARM, and ARM doesn't seem to have a fundamental advantage over Intel [1].
People talk about RISC vs. CISC, and how ARM can be lower power because RISC instructions are easier to decode, but I don't hear that from anyone who's actually implemented both an ARM and an x86 front-end [2]. Yes, it's a PITA to decode x86 instructions, but the ARM instruction set isn't very nice, either (e.g., look at how they ran out of opcode space, and overlayed some of their "new" NEON instructions on top of existing instructions by using unused condition codes for existing opcodes). If you want to decode ARM instructions, you'll have to deal with having register fields in different places for different opcodes (which uses extra logic, increasing size and power), decoding deprecated instructions which no one actually uses anymore (e.g., the "DSP" instructions which have mostly been superseded by NEON), etc. x86 is actually more consistent (although decoding variable length instructions isn't easy, either, and you're also stuck with a lot of legacy instructions) [X].
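To make the register-field point concrete, here's a hedged sketch (field positions recalled from the ARMv7 encodings, so double-check against the ARM ARM): the destination register sits in bits [15:12] for ordinary data-processing instructions but in bits [19:16] for the multiply class, so the decoder needs extra muxing just to find Rd.

    /* Sketch of why pre-v8 ARM decode needs extra muxing: the destination
     * register is not always in the same bit positions. */
    #include <stdint.h>

    static inline uint32_t bits(uint32_t insn, int hi, int lo)
    {
        return (insn >> lo) & ((1u << (hi - lo + 1)) - 1);
    }

    /* Destination register for two common instruction classes. */
    static int decode_rd(uint32_t insn)
    {
        /* Multiply class (MUL/MLA): bits [27:22] == 000000, bits [7:4] == 1001.
         * Destination is in [19:16]. */
        if (bits(insn, 27, 22) == 0x0 && bits(insn, 7, 4) == 0x9)
            return (int)bits(insn, 19, 16);

        /* Ordinary data-processing (AND/ADD/MOV/...): destination in [15:12]. */
        if (bits(insn, 27, 26) == 0x0)
            return (int)bits(insn, 15, 12);

        return -1;   /* loads/stores, branches, NEON, ... omitted */
    }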
On the other hand, Intel has had a process (manufacturing) advantage since I was in high school (in the late 90s), and that advantage has only increased. Given a comparable design, historically, Intel has had much better performance on a process that's actually cheaper and more reliable [3]. Since Intel has started taking power seriously, they've made huge advances in their low power process. In a generation or two, if Intel turns out a design that's even in the same league as ARM, it's going to be much lower power.
This reminds me of when people thought Intel was too slow moving and was going to be killed by AMD. In reality, they're huge and have many teams working on a large variety of different projects. One of those projects paid off and now AMD is doomed.
ULV Haswell is supposed to have a TDP ~10W with superior performance to the current Core iX line [4]. Arm's A15 allegedly has a TDP of ~4W, but if you actually benchmark the parts, you'll find that the TDPs aren't measured the same way. A15 uses a ton of power under load, just like Haswell will [5]. When idle, it won't use much power, and will likely have worse leakage, because Intel's process is so good. And then there's Intel's real low power line, which keeps getting better with every generation. Will a ULV version of a high-end Intel part provide much better performance than ARM at the same power in a couple generations, or will a high performance version of a low-power low-cost Intel part provide lower power at the same level of performance and half the price? I don't know, but I bet either one of those two things will happen, or that new project will be unveiled that does something similar. Intel has a ton of resources, and a history of being resilient against the threat of disruption.
I'm not saying Intel is infallible, but unlike many big companies, they're agile. This is a company that was a dominant player in the DRAM and SRAM industry that made the conscious decision to drop out of the DRAM industry and concentrate on SRAMs when DRAM became less profitable, and then did the same for SRAMs in order to concentrate on microprocessors. And, by the way, they created the first commercially available microprocessor. They're not a Kodak or Polaroid; they're not going to stand idle while their market is disrupted. When Toshiba invented flash memory, Intel actually realized the advantage and quickly became the leading player in flash, leaving Toshiba with the unprofitable DRAM market.
If you're going to claim that someone is going to disrupt Intel, you not only have to show that there's an existing advantage, you have to explain why, unlike in other instances, Intel isn't going to respond and use their superior resources to pull ahead.
[1] I'm downplaying the advantage of ARM's licensing model, which may be significant. We'll see. Due to economies of scale, there doesn't seem to be room for more than one high performance microprocessor company [6], and yet, there are four companies with ARM architecture licences that design their own processors rather than just licensing IP. TI recently dropped out, and it remains to be seen if it's sustainable for everyone else (or anyone at all).
[2] Ex-Transmeta folks, who mostly went to Nvidia, and some other people whose project is not yet public.
[3] Remember when IBM was bragging about SOI? Intel's bulk process had comparable power and better performance, not to mention much lower cost and defect rates.
[5] Haswell hasn't been released yet, but Intel parts that I've looked at have much more conservative TDP estimates than ARM parts, and I don't see any reason to believe that's changed.
[6] IBM seems to be losing more money on processors every year, and the people I know at IBM have their resumes polished, because they don't expect POWER development to continue seriously (at least in the U.S.) for more than another generation or two, if that. Oracle is pouring money into SPARC, but it's not clear why, because SPARC has been basically dead for years. MIPS recently disappeared. AMD is in serious trouble. Every other major vendor was wiped out ages ago. The economies of scale are unbelievably large.
[X] Sorry, I'm editing this and not renumbering my footnotes. ARMv8 is supposed to address some of this, by creating a large, compatibility breaking, change to the ISA, and having the processor switch modes to maintain compatibility. It's a good idea, but it's not without disadvantages. The good news is, you don't have to deal with all this baggage in the new mode. The bad news is, you still have the legacy decoder sitting there taking up space. And space = speed. Wires are slow, and now you're making everything else travel farther.
The 10W chip will be a weaker SKU, probably even weaker than current 17W IVB CULV chips. They are not magically lowering the TDP from 17W to 10W for the next generation.
Cortex A15 will get the benefit of pairing up with A7. ARM says on average the energy consumption should be about half, compared to Cortex A15 alone.
Also Haswell is rumored to cost 40% more than an IVB Core. That's close to $300 for a CULV. That's simply not sustainable when it comes to the new market that is forming for tablets. $300 is more than the whole BOM for your typical $500 tablet. I doubt you'll see that chip in anything cheaper than $800, at a time when you get "good enough" tablets for $200 whole. Intel's competitor to ARM simply isn't the Core line-up. It's Atom, for better or for worse.
And as I said, Intel will lose not because of lack of expertise in making chips, but because of an unsustainable cost structure and business model (they now have to compete against several ARM chip makers at once, including Apple). The fact that they also have no momentum or market share in the mobile market doesn't help.
>Also Haswell is rumored to cost 40% more than an IVB Core. That's close to $300 for a CULV. That's simply not sustainable when it comes to the new market that is forming for tablets. $300 is more than the whole BOM for your typical $500 tablet. I doubt you'll see that chip in anything cheaper than $800, at a time when you get "good enough" tablets for $200 whole. Intel's competitor to ARM simply isn't the Core line-up. It's Atom, for better or for worse.
In terms of OS for an x86 tablet, you're looking at Windows 8, or Linux with a custom shell, and that's it. There's an unofficial x86 port of Android, but I wouldn't stake any real product on that without any support from Google.
Since MS has already established the baseline price for the Win8 tablet around $800, and they're marketing it more as a tablet PC that can do everything you do with a normal desktop or laptop than an iPad, is this really that much of an issue?
There is an official port of Android to x86 (done by Intel); it powers the several Atom phones out there (Orange San Diego, etc.).
You can even download an emulator build for Atom, which performs great (it has support for virtualization extensions).
The real issue for Intel is that 7W is not worth $100 to most consumers. OEMs have been selling plenty of good $400 laptops for a while, and tablet manufacturers can always down-clock a laptop CPU from AMD.
However, this is the reason Microsoft is trying to stick to the $800 price point for as long as it can. Cheaper options are more popular, and they don't want most people to hear anytime soon how 'crap' Windows 8 tablets are on that hardware.
>The real issue for Intel is that 7W is not worth $100 to most consumers. OEMs have been selling plenty of good $400 laptops for a while, and tablet manufacturers can always down-clock a laptop CPU from AMD.
This is the kind of logic that says selling one million $1 products is better than selling 100,000 $10 products. I beg to differ...
The number of units sold is capped by the market size. If someone else is selling $1 devices and you are selling $10 but the $1 device meets the need, you won't be selling much. Microsoft/Intel might bank on those markets being "distinct" - but that's getting more tenuous each year.
The markets compact over time and $1 devices will kill $10 devices.
Overlaying instructions on top of each other by using different address-mode or condition flags isn't new - I remember the 68K doing it nearly 30 years ago - and it's not that hard to deal with. I don't think the argument about ARM being harder to decode holds much water, though some of your other arguments are more compelling.
"6] IBM seems to be losing more money on processors every year, and the people I know at IBM have their resumes polished, because they don't expect POWER development to continue seriously (at least in the U.S.) for more than another generation or two, if that. Oracle is pouring money into SPARC, but it's not clear why, because SPARC has been basically dead for years."
both IBM and Oracle have committed to two more generations of POWER and SPARC respectively. also Fujitsu keeps investing in SPARC. these uarchs are not going to go away anytime soon. they are high-margin niche products in the enterprise and government space with considerable footprints and huge long-term service contracts attached to them.
POWER and SPARC are not competing against low-power CPUs but aim for the high-socket-count, large-unified-memory, RAS market, which for some workloads is the only alternative.
especially SPARC, while having shrinking market share, still makes Oracle more than a billion annually. afaik IBM's POWER division is not losing money either.
long story short: in the next five years there will be at least four ISAs (ARM, SPARC, POWER, x86) in the server space, but not all compete for the same markets.
[X] Isn't it possible to have your code base compiled solely for ARMv8 and leave the baggage and compatibility behind in one or two revisions? I am guessing Apple may be able to pull this off for a pure ARMv8 advantage.
I continue to hear how MIPS is still technically and architecturally better than even the cleaner ARMv8. But they never succeeded. So whether an ISA succeeds depends on a lot of other factors.
I'd like to point out that ARMv8 (the 64-bit version of ARM) fixes basically all of the decoding issues of ARM. It's really more of an entirely new ISA than just an extension of old ARM, and it ended up as a really clean and good high-performance ISA.
Intel's problem is not the technology, it is their business practices. ARM is all about customization, and Intel is about commodity motherboards. Intel has restricted every item that might give one PC manufacturer an advantage (e.g. no custom chipsets).
This is missing the point. The third party chipset market died off for exactly the same reason that you posit: all the features that were in the "chipset" are now in the SoC.
So what you seem to be saying is that ARM only makes CPU cores (and now a GPU, I guess), so there's a wide market available for NVIDIA and Qualcomm to enter to provide a complete SoC.
Intel, on the other hand, makes CPU cores, and GPUs, and display controllers, and DRAM controllers, and USB controllers, and PCI bridges, and audio hardware. And they put all that stuff on a single chip for their customers.
... and somehow you're spinning this as an advantage for ARM Ltd.?
No, the third-party chipset market died because Intel killed it. NVIDIA had a superior chipset with a better GPU and Intel stopped them. Intel wants commodity production of a very limited number of designs.
Intel doesn't want customizations that can be offered by other companies. Intel wants Intel's GPU, not a choice of NVIDIA, S3, ARM, and soon AMD/ATI. Intel wants to ship "Phone Motherboard 1", etc.
It's an advantage because ARM lets other companies play and Intel won't.
This is still spinning. What you're saying is still isomorphic to: ARM has an "advantage" because it's a smaller company with fewer product offerings and no ability to fab its own chips. That's just insane. ARM Ltd. (the company) is successful in spite of these facts, not because of them.
Maybe what you're really trying to talk about is the "ARM ecosystem", where the big mix of players has a market incentive to try new stuff. And there you might have a point. But it's certainly no disadvantage to Intel specifically -- every one of those players wants to be doing what Intel already is (Apple is very close already, with their own CPU core and SoC design).
'ARM has an "advantage" because it's a smaller company with fewer product offerings and no ability to fab its own chips.'
I think what he's saying is Intel is at a disadvantage because it's the only company with the ability to fab its designs. Therefore, any new designs will have to compete for production capacity and catalog space with whatever the currently most-profitable product is.
Large companies have great difficulty managing multiple product lines that diverge widely in their natural profit margin.
I was saying Intel doesn't want any customization because it needs volume and price to maintain profits. Customization subtracts from volume. They have a PC mentality, not a mobile mentality. Power consumption isn't the problem; the business model is.
The point of the article is that Intel has been disrupted by ARM. You can disagree with this or not, but all these points are kind of irrelevant. People making mobile devices care about power efficiency, and Intel is behind for another year or two. Worse, even if it catches up it needs to compete with ARM's business model, which cannot support Intel's revenue model. The point is that ARM allows — as you say — anyone to make their own SoC and drain the profits from Intel's pool — they're not going to give that up unless Intel offers some incredibly compelling advantage, and it's not clear what that might be.
>The point of the article is that Intel has been disrupted by ARM.
Has it?
Intel is a much smaller part of all CPUs sold, that's true, but it's also true that the market for CPUs has increased exponentially in the last few years. The places where Intel is losing are places where it has never actually competed.
A decade ago, if you were looking for a low-power CPU for a mobile device, you sure as hell weren't looking at x86. You were going with an ARM solution. That hasn't changed, but the market for those CPUs has grown incredibly.
Intel actually did license ARM tech at one point with their XScale chips. It also wouldn't matter if they never competed in the phone market before, because the phone and tablet market is eating away at the desktop/laptop market already. People are probably more likely to want a flashy new phone or tablet, than upgrade their desktop or notebook.
I'm not spinning, and no, ARM's size isn't an advantage (I didn't say it was).
Intel wants to build everything in volume. Intel's current business model does not benefit from specialization or customization. Intel needs to make a large profit on each CPU sold. Intel drove out of the market anyone who could build a chipset since it interfered with volume and profits. Intel would be happiest with their current business model if they built one laptop motherboard, one server motherboard, and one desktop motherboard.
This is a strategy built for the PC market. It does not have anything to do with the current mobile market (non-laptop).
Samsung and Apple want to build the best end product. They want to put things in and leave things out. They cannot do that with Intel, but can do that with ARM. Intel doesn't allow or want customized SoC. Apple and Samsung do. Other vendors also make their own SoC from ARM cores. These SoCs provide different benefits. Having a common instruction set allows switching to another SoC when needed.
> Intel wants Intel's GPU not a choice of NVIDIA, S3, ARM, and soon AMD/ATI.
I haven't seen any signs of Intel trying to push their GPUs into ATI/NVidia's niche market (gamers).
NVidia and ATI (now AMD) GPUs have always had their strongest consumer base among gamers [1]. Intel GPUs have never been in the same class as their contemporary NVidia/ATI cards on any measure -- triangles per second, texture bandwidth, gigaflops. Intel also generally uses a shared-memory architecture, which means their memory bandwidth is limited and contends with the CPU.
Intel's GPUs are focused on being a low-cost, low-power, on-board graphics solution. As long as they can run a 3D UI and play HD video, they're not going to push the performance envelope any more, for the good and simple reason that they don't want to incur additional manufacturing cost, chip area, design complexity and power consumption for features that are irrelevant to non-gamers.
[1] By "gamers," I really mean anyone who's running applications that require a powerful GPU.
I completely agree that on performance per watt Intel is on the path to be where it needs to be, but ARM is on a path to be on par with chips that even AMD's x86 offerings can't touch on a core-by-core basis. Now, with Microsoft's Windows 8 trying its very best to be a tablet experience, it seems from sales figures that people much prefer an Android or Apple tablet to even i5/A10 laptops with Windows 8. The technicalities you present are right on the money, but what we have here is a perfect storm. Microsoft wants to be a tablet OS manufacturer and is skirting the traditional desktop/laptop experience, leaving Intel with no real face value for its contribution.
When people go to their local stores and see rows of tablets that look like tablets and rows of laptops that look like tablets and rows of desktops that look like tablets, well, they just seem to get actual tablets. Sure, an Intel i3 will handily beat out the upper echelon of ARM offerings, but with Android and iOS being entirely optimized for this experience, we are finally realizing what AMD fans have been shouting for decades: the extra horsepower really does not come into effect often enough to be a deal breaker. Your web page may load 40% slower on an ARM rig, but when the Intel model loads it in 1.5s and your tablet loads it in 2s, we are in the realm of diminishing returns. If a dual-core ARM A15 can consistently run at around 40-50% of the speed of an i3 mobile processor, a quad-core ARM should settle at around 60-75% while being asked to do much, much less.
With Intel and ARM you are also dealing with two very different ecosystems. x86 has had to be fast because most of the applications you run on a daily basis are likely not really optimized, profiled or threaded beyond a couple of compiler switches. The chips have to be fast because the code is so slow. Now with Android and iOS, the language, libraries and sandboxing improve the underlying mechanisms to the degree that most of the code that matters is optimized by Apple and Google, whereas the equivalent Microsoft Windows libraries are not as optimized and in many cases so specialized that they give the look and feel of a WordPad-type app rather than what you are really after.
Basically I feel that pointing strictly at Intel's raw performance as the reason a platform will succeed is improper and an unfair comparison. People are moving en masse to tablet and smartphone ecosystems not because they are faster or run on a particular processor, but because they feel like a custom solution and the overall integration is acceptable. So I don't think it is ARM vs. Intel but rather tablet vs. notebook and laptop. If Microsoft keeps making its desktop OS look and feel like a tablet OS, people will buy tablets, and Windows 8 doesn't have the applications, word of mouth or market penetration to make that work right now.
If things don't change, and quickly, we may just see a Microsoft and Intel "double clutch".
>x86 has had to be fast because most of the applications you run on a daily basis are likely not really optimized, profiled or threaded beyond a couple of compiler switches. The chips have to be fast because the code is so slow. Now with Android and iOS, the language, libraries and sandboxing improve the underlying mechanisms to the degree that most of the code that matters is optimized by Apple and Google, whereas the equivalent Microsoft Windows libraries are not as optimized and in many cases so specialized that they give the look and feel of a WordPad-type app rather than what you are really after.
This is so wrong, I actually don't know where to begin.
1. It's true that most code isn't optimized for x86. But most code isn't optimized, period. Optimization is freaking hard. Android and iOS aren't necessarily better optimized than Windows. And Linux, especially RHEL, is screaming fast on the new Intel chips. Windows isn't that terrible, either.
2. Sandboxing actually hurts performance, because it requires an additional layer between the OS and userland to make sure that the code the user is executing is correct.
3. None of this actually matters for chip architecture, since #1 is true of code in general, and #2 doesn't have any special architecture-based support.
When you program for Android/iOS, how much of the logic you write is referenced from optimized libraries and how much is your own craft? Now look at the entire Market Place / App Store and figure how many of those apps rely entirely on optimized Android/iOS libraries.
While it may be true that some wander off the beaten path and write their apps in OpenGL ES directly, and maybe even in C/C++, most rely on the frameworks and libraries already built in (which are indeed optimized).
You're still running on the blind assumption that Apple and Google are magically better at optimization than everyone else. I would not make that assumption, because, as I said before, optimization is hard. There are a billion variables that go into your code's performance, and tiny changes can completely ruin performance, or make it awesome.
Look at Android (as an example). For the first few years of its existence, the OS was plagued with issues of bad battery life due to poorly optimized code and bugs. Android 3.0 was basically scrapped as an OS due to bad performance.
iOS 5.0 had absolutely horrible battery life due to a bug. The Nitro Javascript engine isn't available outside of Safari, too.
So you are suggesting that it is out of reach for Google and Apple to publish libraries and frameworks that are optimized for a specific hardware platform (ARM A7 + A15)? Or is it that Windows X already optimizes for Intel/AMD offerings to such an overwhelming extent that the difference would be moot?
...where Apple beats three community-developed libraries. But maybe it just wasn't that hard to beat them.
I'm also skeptical because I don't see how Apple has any incentive to optimise code ever. Their devices (ARM & x86) are doubling in CPU power left and right while the UX basically stays the same. The second-to-last generation inevitably feels sluggish on the current OS version...which just happens to be the time when people usually buy their next Apple device. Why should they make their codebase harder to maintain in that environment?
>...where Apple beats three community-developed libraries.
That's just in one very restricted area (JSON parsing) where there are TONS of third-party libraries of varying quality for the exact same thing. Doesn't mean much in the big picture.
>I'm also skeptical because I don't see how Apple has any incentive to optimise code ever.
And yet, they used to do it all the time in OS X, replacing badly performing components with better ones. From 10.1 on, each release actually had better performance on the SAME hardware, until Snow Leopard at least. They had hit a plateau there, I guess, where all the low-hanging-fruit optimisations had already been made.
Still, it makes sense to optimise heavily, if for nothing else than to boast better battery life.
And has there ever been an iOS update that has made things faster on the same hardware?
I don't think that Apple is intentionally making things slower, which is what I'm trying to say with the JSON parser (it is easy to write a wasteful implementation). But in the big picture, they're not optimising much either.
>"When you program for Android/IOS how much of the logic you write is referenced from optimized libraries and how much is your own craft?"
It doesn't matter either way.
For one, Apple and Google aren't that keen on optimising their stuff either.
Second, most desktop applications use libraries and GUI toolkits from a major source, like Apple and MS, so the situation where "a large part of the app is made by a third party that can optimise it" applies to those too.
Third, tons of iOS/Android apps use third party frameworks, like Corona, MonoTouch, Titanium, Unity, etc etc, and not the core iOS/Android framework.
Fourth, the most speed critical parts of an app are generally the dedicated stuff it does, and not the generic iOS/Android provided infrastructure code it uses.
The problem is two-fold. First, ARM is being supported by a lot of companies - Apple, Samsung, Microsoft, etc. - versus Intel basically by itself on x86.
Second, ARM chips are too cheap. Intel's biz is built on $100+ chips. ARM chips are like $10. If intel's chip prices drop to say $25, they don't have nearly the money for R&D.
X86 won't die, but it can't grow and over time that's going to hamstring Intel.
I don't know that X86 can't grow over time (servers) but I think you're spot on that dirt cheap chips, and big companies that want to keep them dirt cheap, are in some sense Intel's deeper problem. Intel could make up any given technical gap eventually, but it's hard for them to shut down competition when other, very well-funded players want to maintain it.
But on the flipside, ARM right now is very, very low-end -- the fancy new Cortex-A15s only match up against Atoms in single-thread CPU performance, and Atom < ULV < LV < desktop/server. You get away with it in mobile because of lower user expectations and heavy use of the GPU. When you look a little further out (Cortex-A53/57 on newer processes) you can picture ARM in actual client computers, or at least in super-zippy mobile gadgets that some people will happily replace their computers with. (Consumer software will probably adapt to an ARMy world too--use the GPU well, adapt to slower cores, use UI tricks to hide some CPU-caused delays.)
But I can't see an ARM chip that acts exactly like a Xeon within the next few years. I bet ARM finds some niches in the datacenter, where servers today have far more CPU than they need, or applications adapt well to a sea of tiny slow cores, or both. (For instance, Facebook uses AMD memcached boxes; they could just as well use ARM, and are looking at it. Intel will make cheap slow cores for those use cases, too.) And I bet ARM will put some price pressure on Intel. But all the things a top-of-the-line Intel chip does to maximize instruction-level parallelism will be really hard for anyone to copy for a very long time.
Why can't it grow? Server builders still want the fastest CPUs. For them, x86 still gives the best performance per watt and dollar. Yes, ARM servers are starting to appear but I don't think they're up to par with Intel yet.
And the cloud needs servers. Lots and lots of servers.
True, Intel faces stiff competition here. But folks are sometimes forgetting Intel wasn't always a monopoly in its field. It had competition, lots of it over the years. I wouldn't bury them just yet.
I think programmers overstate the demand for servers vs. clients. By definition client machines could be 10 - 100,000 per server. So, yes, intel can sell a lot of powerful server chips, but the high volume client machines like desktops, laptops, phones, tablets, etc. are moving away from x86. That is bad for intel and AMD and x86 in general.
Ever since I read the Innovator's Dilemma around 2006 or so, I've tried to watch for other examples of disruptions happening in the tech industry, including laptops (disrupting PCs), iPhone/Android phones (disrupting Nokia/RIM smartphones), iOS/Android (disrupting Windows/Mac OS) and a few others.
But while you could still find something to argue about in some of those cases, especially when the "fall off a cliff" hasn't happened yet for those companies (disruption takes a few years before it's obvious to everyone, including the company being disrupted), I think the ARM vs Intel/x86 one has been by far the most obvious one, and what I'd consider a "by-the-book" disruption. It's one of the most classical disruption cases I've seen. If Clayton Christensen decides to rewrite the book again in 2020, he'll probably include the ARM vs Intel case study.
What will kill Intel is probably not a technical advantage that ARM has and will have. But the pricing advantage. It's irrelevant if Intel can make a $20 chip that is just as good as an ARM one. Intel made good ARM chips a decade ago, too. But the problem is they couldn't live off that. And they wouldn't be able to survive off $20 Atom chips. The "cost structure" of the company is built to support much higher margin chips.
They sell 120 mm2 Core chips for $200. But as the article says, very soon any type of "Core" chip will overshoot most consumers. It has already overshot plenty, because look at how many people are using iPads and Android tablets or smartphones, and they think the performance is more than enough. In fact, as we've seen with some of the comments for Tegra 4 here, they think even these ARM chips are "more than enough" performance-wise.
That means Intel is destined to compete more and more not against other $200 chips, but against other $20 chips, in the consumer market. So even if they are actually able to compete at that level from a technical point of view, they are fighting a game they can't win. They are fighting by ARM's rules.
Just like Innovator's Dilemma says, they will predictably move "up-market" in servers and supercomputers, trying to chase higher-profits as ARM is forcing them to fight with cheaper chips in the consumer market. But as we know ARM is already very serious about the server market, and we'll see what Nvidia intends to do in the supercomputer market eventually with ARM (Project Denver/Boulder).
As for Microsoft, which is directly affected by Intel/x86's fate, Apple and Google would be smart to accelerate ARM's takeover of Intel's markets. Because if Microsoft can't use their legacy apps as an advantage against iOS and Android, that means they'll have to start from scratch on the ARM ecosystem, way behind both of them. Apple could do it by using future generations of their own custom-designed ARM CPU in Macbooks, and Google by focusing more on ARM-based Chromebooks, Google TV's, and by ignoring Intel in the mobile market. Linux could take advantage of this, too, because most legacy apps work on ARM by default.
The biggest difference is that Intel has consulted closely with Christensen, and is not afraid to cannibalize their own market to retain dominance. The Intel Celeron came directly from their consultations with Christensen. The Celeron significantly dented Intel's profits temporarily but was the beginning of the end for AMD.
And certainly the price is a very significant factor. But remember that ARM sells an order of magnitude more chips than Intel does. So if Intel is successful, they can make it up on volume, at least to a degree.
I don't recall the Celeron being a major problem for AMD. The things that did hurt were:
a) Intel's effectiveness at preventing AMD SKUs from hitting markets
b) The Core 2 family from Intel
c) AMD insisting on shipping 'native' dual / quad cores - there wasn't any advantage to the end user and I would imagine the yields were worse
I know, but that was back in Andy Grove's time. I don't think Paul Otellini ever understood the innovator's dilemma that well. In a sense, it does seem like they get it now and try to compete with ARM, but I'm not so sure this came from within the company. I think they were pressured into it by stakeholders and the media a few years ago.
But again, even if they succeeded in making competitive chips against ARM, that doesn't equal market success in the mobile market, and it doesn't mean they will survive unless they take serious steps to survive in a world where they are just one of several companies making chips for devices, where they might not even have a big share of that market, and where they make low-margin chips. The bottom line is they need to start firing people soon, restructure salaries, and so on. I think this is why Paul Otellini left. He didn't want to be the one to do that, and be blamed for it.
Intel's 120mm2 Core chip is built on a smaller process than the ARM based competition meaning there are far more transistors on that $200 chip than the one you are comparing it to. Also keep in mind that Intel doesn't need to charge $200 to make a profit. The current Atom chips are price competitive with ARM based alternatives. The Atom parts aren't quite up to par with an A15 based part yet but I think it'd be foolish to count Intel out.
The primary cost of a SoC is manufacturing. Process advantages mean that you have access to cheaper transistors that have better performance and power characteristics. The easiest way to improve the ratio of performance to anything in microprocessors has always been to make it smaller. There have been far too many words wasted on the role of instruction sets and architectures. Those things matter but that's the easy part. The hard part is getting a meaningful advantage in the manufacturing side, which is what Intel has. This is precisely why AMD is dying. They can't even undercut Intel because Global Foundries is so far behind Intel that they physically can't produce an equivalent product for less despite Intel's ~60% margins.
I think what you will see is Intel getting more aggressive in the mobile space in the next couple of years, because they are going to want to ensure that TSMC doesn't get a chance to catch up. TSMC, not ARM, is the real key to anyone threatening Intel.
Great analysis. Yes, cost structure; yes, ARM is the incumbent with all those forces in their favour, such as being tuned to the upgrade rhythm of the phone market.
A way out for Intel is their world-leading foundries, with process shrink a generation ahead. It's been suggested they manufacture ARM SoCs, and sell at a premium. But there isn't really a premium market... except for Apple, and its massive margins. And Apple is feeling the heat from competitors hot on its heels. Therefore: Intel fabs Apple's chips. Intel gets a premium. Apple gets a truly unmatchable lead. It's sad for Intel, but Andy Grove has a quote on the cover of The Innovator's Dilemma. They know the stakes.
The nice thing for consumers would be a 2x-faster, or 2x-battery-life, or half-weight iPhone/iPad next March, instead of in 1.5 years.
BTW, re: Tegra 4/overshoot - In the next generation, when silicon is cheap enough for octa-core, because we don't need the power (and can't utilise that many cores anyway) it will instead lead to the next smaller form factor. But smaller than a phone is hard to use, for both I and O. A candidate solution is VR glasses - because of size.
> But smaller than a phone is hard to use, for both I and O. A candidate solution is VR glasses - because of size.
If we could get wireless HDMI down and standardized (or wireless DisplayPort, or some display standard over some wireless frequency band), I can easily see computers going so far as being solar-powered interconnected nodes, where instead of lugging around hardware you just link up to the nearest node and utilize a network of tiny quarter-sized chips, each as capable as a modern dual-core A9 or some such, running off solar.
I wonder if Ubuntu has a chance in this setup. In a real productivity-driven environment, having a single-app-in-fullscreen OS is just absolutely insufficient for the purpose. But Ubuntu already runs on the Nexus 7 (albeit really slowly, because Compiz and Unity are... lackluster).
I don't think the Unity desktop will make it, but I definitely see some windowed environment inserting itself into the gap between Android tablets and Windows laptops on the high end ARM chips. And unlike Windows, the GNU core has a ton of software written that runs on it already, and thanks to being open source and compiled against GCC, this stuff rarely has large issues besides performance running on ARM.
I only say Ubuntu because it seems Canonical is the only market force trying to push a GNU OS commercially. Red Hat seems content to let Fedora just act as the playground for Enterprise, and Arch / Mint / Gentoo / Slack / etc. don't have the design goals (ease of use for a completely new user) or infrastructure (Debian and its ultra-slow release schedule wouldn't fly).
(The "cost structure" of the company is built to support much higher margin chips.")
That is where you got it completely wrong. Intel could produce ARM-based SoCs that earn nearly the same margin as the CPUs they are currently selling.
Not to mention you keep referencing x86 and Intel as if they were the same thing, now and for the foreseeable future. I could literally bet anything that Intel won't die, simply because Intel could still be the best fab player you could ever imagine. In terms of state-of-the-art fabs, they beat TSMC, UMC, GF, and Samsung combined! And Intel isn't dumb either; they have the best of the best fab engineers, and the resources and R&D being put in now are for the coming 3-5 years into the future.
So Intel won't die.
x86? That depends. If you look at the die shot of an SoC, you will notice the CPU is playing a smaller and smaller part in die area. It used to be 50+%; now it is less than 30%. The CPU, or ISA, is becoming less important. You will need a combination of CPU, GPU, wireless, I/O and other things to succeed.
> But as the article says, very soon any type of "Core" chip will overshoot most consumers.
Your claim is essentially "Most people will never need today's high-end chips, let alone anything more powerful." This could have been equally well said in 1998. How do you know you're not as wrong making that claim now, as you would have been making that claim then? What's different?
Today's computers are powerful enough to comfortably run today's software. Tomorrow's computers will have a lot more power than today's software needs; but that's irrelevant, because they'll be running tomorrow's software instead.
To lay out my case in a little more detail:
"As hardware resources increase, software will bloat until it consumes them all." This is probably somebody's law, but I don't know who off the top of my head.
You don't really need more than ~300 MHz, 128 MB to do what the vast majority of users do: Word processing, email, and displaying web pages.
Usage patterns may change as you increase the amount of computing power you need. For example, I usually have a large number of tabs open in my web browser -- I probably wouldn't use the browser in this way if I had much less memory.
Some software is just bloated. My Windows box has dozens of different auto-updaters that run on every boot. Steam does literally hundreds of megabytes of I/O doing -- something, Turing only knows what.
Of course all the latest UI's have all kinds of resource-intensive 3D effects, run at unnecessarily high resolutions, use antialiased path rendering for fonts instead of good old-fashioned bit blitting, et cetera.
The point is that, as the standard hardware increases, OS'es and applications will add marginally useful features to take advantage of it. Users will learn new usage patterns that are marginally more productive but require much better hardware. As standard software's minimum requirements rise, people buy new hardware to keep up.
This is not a novel idea; it's been the story of computing for decades, and a trend that anyone who's been paying any attention at all to this industry has surely noticed.
Except that's exactly not what's going to happen - the shift is determinedly away from heavy desktop apps to lightweight clients and more work done on the server backend. That causes a dramatic split in processing needs: clients get thin (processing requirements flatline) and servers get fat (now all the work is server side). What used to be a "unified" market suddenly is not any more.
> "Just like Innovator's Dilemma says, they will predictably move "up-market" in servers and supercomputers, trying to chase higher-profits as ARM is forcing them to fight with cheaper chips in the consumer market."
Which way is up? Intel's been moving down in terms of per-core wattage since 2005, putting them closer to direct competition with ARM. Anybody can glue together a bunch of cores to get high theoretical performance, but it's Intel's single-threaded performance lead that is their biggest architectural advantage.
In this case "up" is defined by potential profit/chip. There are structural and easy to miss business reasons why it is very hard for any company to successfully move into markets where the profit margins are below what they are structured to expect, and it is very easy for a company to move into markets where the profit margins are above what they are structured to expect.
Intel has to improve their power because a major market for them - server chips - is full of people who want to spend less on electricity for the same computation. Despite this push, they have made absolutely no inroads into the unprofitable mobile market. By contrast ARM, which already has the required power ratios, has every economic incentive in the world to move into the server market. Unless Intel can offer a good enough power ratio to offset the higher costs of their chips, ARM will eventually succeed.
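A back-of-envelope way to see when that trade-off flips; all numbers below are hypothetical placeholders, the point is only the structure of the comparison (purchase price plus electricity over the service life, ignoring per-socket performance differences, which of course also matter):

    /* Back-of-envelope server TCO: chip price plus electricity over its life.
     * Every number here is a hypothetical placeholder, not a real part. */
    #include <stdio.h>

    int main(void)
    {
        const double hours       = 4.0 * 365.0 * 24.0;  /* 4-year service life */
        const double usd_per_kwh = 0.10;                /* assumed energy price */
        const double pue         = 1.5;                 /* cooling overhead multiplier */

        const double intel_price = 400.0, intel_watts = 80.0;   /* illustrative */
        const double arm_price   = 100.0, arm_watts   = 25.0;   /* illustrative */

        double intel_power = intel_watts / 1000.0 * hours * usd_per_kwh * pue;
        double arm_power   = arm_watts   / 1000.0 * hours * usd_per_kwh * pue;

        printf("Intel: $%.0f chip + $%.0f electricity = $%.0f\n",
               intel_price, intel_power, intel_price + intel_power);
        printf("ARM:   $%.0f chip + $%.0f electricity = $%.0f\n",
               arm_price, arm_power, arm_price + arm_power);
        return 0;
    }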
I thought one of the major advantages Intel held was that it owned its own manufacturing, allowing it to iterate more quickly. However, this article seems to claim it needs to shed itself of that. Is it not really an advantage in x86 then? If it is an advantage in x86, why isn't it here? Or is this just a case of blindly copying ARM's business model?
Thinking purely objectively, it's always an advantage to have the most money in the bank and the largest installed customer base. Owning the best semiconductor foundries is a big advantage for the foreseeable future.
But so once was also owning the world's largest battleship. Things change. How often do the most disruptive changes come from or favor those with the largest physical plant?
I think it depends on how it is structured. Owning your own manufacturing likely makes it easier to iterate than hiring other companies to manufacture for you.
What ARM is doing is the opposite - they are the ones hired by the foundries to provide an architecture, and then it is up to the foundries to make the architecture run. This largely decouples the architecture design from manufacturing constraints, giving them free rein to innovate.
If a foundry can't cope, that will not hurt ARM's bottom line, since other foundries can, whereas if you own the manufacturing, you actually have to pay to avoid obsolescence.
Not sure about the end of x86, but he is right about one point I've been shouting about for years: mainstream computers are far too powerful for the average user. My mother reads mail and looks at pictures of kids/grandkids with her computer; what is the i7/8GB/500GB with a crapload of GPU cores for? Why pay for that kind of power when a cheap Android laptop would easily suffice? My parents, grandparents, uncles or even my siblings and cousins have zero need for that power, or for Windows. None of them. They notice the difference when they have/touch an iPad or Android pad/computer; they find it easier to wield; they use a handful of apps anyway. So because it has manufacturing advantages, Intel, in my eyes, doesn't have to strive for power or compatibility in future chips; they just need to use almost no battery. The only thing I hear non-computer-savvy people talk about is battery life and a 'clear screen'. So high-res (Nexus 10) screens, screens you can view without squinting in bright sunlight, solar cells invisibly built in and a few days of battery life for a <= $500 price, and you'll be selling until silicon runs out.
Even for coding you don't really need all that power most of the time; if you are compiling big source trees, sure, but why not just do that in the cloud on EC2 or a dedicated server? That way you can work freely on your laptop. Gaming and very heavy graphics or music work I can see you needing a fast computer in front of you for, but beyond that?
I wonder when we'll start seeing apps running on ARM capable of matching current x86 based content creation apps?
2 years? 3?
Intel will catch up on power consumption. The biggest thing going for ARM is price, and because of price their user base is blowing up much faster than Intel's, on more types of devices, and in more parts of the world. Most of the developing world's contact with computing is/will be ARM phones and tablets, and the number of people developing software for ARM will skyrocket.
I was disappointed when I tried a fractal simulator on my Nexus 7 recently, and it couldn't zoom smoothly. Perhaps not the most common task in the world, but I think there is still demand for more computing power out there...
I used a tablet for a while; now I use an ultrabook. I'm hoping tablets qua tablets are a fad, and going forwards new systems will combine the good parts of tablets (small, light, touchscreen, long battery life) with the good of traditional machines (keyboard, wintel backwards compatibility, performance)
Companies are different from instruction sets, and the disruptor is the ARM instruction set... on Qualcomm. QCOM is already called "the Intel of the mobile world" and, as the world is going mobile, thar be the disruptor.
Analyses that repeat the “post PC” mantra in its various forms may be correct in doing so, but the mantra is getting threadbare. Since I’ve read this sort of thinking so many times (desktops are dead, Intel is doomed, ARM is the new hotness, etc.), I don’t find it terribly interesting to hear the same restated. Don’t get me wrong, I appreciate the detailed analysis provided by the author, but the thesis is unsurprising.
Here’s what I would like to read if a technology journalist could dig it up: What kind of strategic planning is going on within the halls of Intel, Dell, HP, Lenovo, et al with respect to keeping the desktop PC relevant? Put another way: I find it astonishing that several years have been allowed to pass since desktop performance became “good enough.” The key is disrupting what people think is good enough.
The average consumer desktop and business desktop user does consider their desktop’s performance to be good enough. But this is an artifact of the manufacturers failing to give consumers anything to lust for.
Opinions may vary, but I strongly believe that the major failure for desktop PCs in the past five years has been the display. I use three monitors--two 30” and one 24”--and I want more. I want a 60” desktop display with 200dpi resolution. I would pay dearly for such a display. I want Avatar/Minority Report style UIs (well, a realistic and practical gesture-based UI, but these science-fiction films provided a vision that most people will relate to).
I can’t even conceive of how frustrating it is to use a desktop PC with a single monitor, especially something small and low-resolution like a 24 inch 1920x1080 monitor. And yet, most users would consider 24” 1920x1080 to be large and “high definition,” or in other words, “good enough.”
That’s the problem, though. As long as users continue to conceive of the desktop in such constrained ways, it seems like a dead-end. You only need so much CPU and GPU horsepower to display 2D Office documents at such a low resolution.
There was a great picture CNet had in one of their reports (and I grabbed a copy at my blog [1]) showing a user holding and using a tablet while sitting at a desktop PC.
In the photo, the PC has two small monitors and is probably considered good enough to get work done. But the user finds the tablet more productive. This user should be excused for the seemingly inefficient use of resources because it’s probably not actually inefficient at all. The tablet is probably easier to read (crisper, brighter display) and faster, or at least feels faster than the PC simply because it’s newer.
Had desktop displays innovated for the past decade, the PC would need to be upgraded. Its CPU, GPU, memory, and most likely disk capacity and network would need to be beefier to drive a large, high-resolution display.
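As a rough illustration of the jump involved (a back-of-envelope sketch, assuming a 16:9 panel, 24-bit color and 60 Hz): a 60-inch display at 200 dpi works out to roughly 60 megapixels, and just scanning that framebuffer out is on the order of 11 GB/s before any rendering or compositing happens.

    /* Back-of-envelope: what a 60" 200-dpi desktop display implies.
     * Assumes 16:9, 24-bit color, 60 Hz; all figures are rough. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double diag_in = 60.0, dpi = 200.0, hz = 60.0, bytes_pp = 3.0;
        const double aw = 16.0, ah = 9.0;

        double diag_units = sqrt(aw * aw + ah * ah);
        double px_w = diag_in * aw / diag_units * dpi;
        double px_h = diag_in * ah / diag_units * dpi;

        printf("%.0f x %.0f pixels (%.1f MP), ~%.1f GB/s just to scan out\n",
               px_w, px_h, px_w * px_h / 1e6,
               px_w * px_h * bytes_pp * hz / 1e9);
        return 0;
    }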
So again, what are the PC manufacturers doing to disrupt users’ notions of “good enough,” to make users WANT to upgrade their desktops? I say the display is the key.
In general I agree with your ideas, but I think there's a missing component: general use case. Most people only need their computers for email and web browsing. We can further simplify that and just call it - communicating.
Sure, a developer or designer salivates at the idea of more screen real estate, but that's because there's a practical use for it. PC Manufacturers follow, not decide, the needs of their users.
I love my screen real estate because I actually need it. If I just browsed Facebook, wrote a word document, and maybe planned out my finances with a spreadsheet - I'd have a hard time justifying some giant monolith of a monitor. I think this is a largely overlooked factor in the success of the mobile market.
I don't know if everyone's forgotten this already, but when the iPad first came out, most people were thinking: what in the actual eff is Apple thinking? Sure, we all knew it'd sell because, well, Apple is Apple. But if I recall correctly most people were scratching their heads asking, "so, it's a big iPhone, right?"
And guess what? It is just a big iPhone! And it succeeded NOT because it was the next "cool" thing but because it was designed to do what most people needed their computer to do, namely, send pictures of their grandkids to each other.
Once the monitor is so large that you need to move your head to view it all, you have gone too large. Almost no one wants to stare at a wall sized monitor from 2-3 feet away.
PC sales, even with the release of Windows 8, did drop 21% compared to one year ago...
OK. But would it even be remotely possible to consider that year-to-year China is in recession, a lot of european countries are in recession and U.S. is not in a great position (e.g. the manufacturing sector is firing people left and right), Japan is in a terrible situation, etc. and that this may be playing a role on the number of PCs sold?
Year-to-year sales of cars in France have gone down by 20%.
When people enter a recession they tend to try to save money: cars and PCs are expensive things. Smartphones not so much (especially with all the "plans" luring in people who can't do the math).
I think that smartphones and tablets did play a role in the "minus 21%" that TFA mentions, but I'm also certain that the worldwide recession is playing a role too. People don't buy what they see as "expensive" that easily.
With a $100 phone plus a five-year "unlimited" plan with onerous terms, they don't pay that much attention, and so smartphones tend to be more "recession proof".