That headline is mind-bogglingly stupid. Cheap, powerful chips as a refutation of Moore's law? Buh-what?
Of course, it starts off with an entirely fabricated misquote of Moore's law. Wikipedia's article strikes me as pretty good, and there are a number of reasonably good ways of looking at it, including component density over time or price per transistor. Note that neither "raw performance per CPU" nor "MHz" shows up in any formulation; nothing in Moore's law says whether you should take your X-nanometer process and make a hundred Opterons or a thousand ARMs per silicon wafer.
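For what it's worth, the usual statement is just a doubling of transistor count on a fixed time scale. A back-of-envelope sketch, assuming the commonly cited two-year doubling period and starting from roughly the 4004's ~2,300 transistors in 1971, shows there is no term in it for clock speed or per-chip performance:

    /* Toy Moore's law projection: transistor count doubling every two years.
     * Starting point (~2,300 transistors, 1971) and the two-year period are
     * the commonly cited figures; nothing here mentions MHz or per-CPU speed. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double base_year = 1971.0;
        const double base_transistors = 2300.0;   /* roughly the Intel 4004 */
        const double doubling_period = 2.0;       /* years */

        for (int year = 1971; year <= 2009; year += 2) {
            double n = base_transistors *
                       pow(2.0, (year - base_year) / doubling_period);
            printf("%d: ~%.0f transistors per chip\n", year, n);
        }
        return 0;
    }

Whether those transistors end up as one huge core or a thousand small ones is entirely outside the formulation.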
"Last month, a Motley Fool blogger suggested that Apple might drop the ARM processor in the next version of the iPhone, either for the next version of Intel's Atom, codenamed Moorestown, or if Apple build its own chip using the technology from its acquisition of low-power chip maker P.A. Semi Inc. last year."
Apple switching the iPhone to Atom is highly unlikely. They'd have to scrap their entire library of software (or run it under emulation, which would be unusably slow), and they'd probably end up losing out on power consumption anyway.
Building their own ARM is totally possible, though.
Apple has more than a little experience with moving between CPU architectures, and with distributing multi-architecture binaries. Plus, they control the application distribution channel. I doubt they'd have too much trouble switching. Well, other than the likelihood that ARM is going to hold a performance-per-milliwatt edge over Atom for a while still.
An ARM core is so small that Intel could license ARM's IP and build one inside an Atom processor, sacrificing only a tiny bit of the cache memory. If the ARM core could reuse some logic on the Atom side, the sacrifice would be even smaller.
I can't believe they think a Windows version is a prerequisite to success in penetrating the server market.
Taking websites as an example: the clear leader amongst web servers used by the million busiest websites is Apache, with a 66% share. It has a 47% lead over its closest competitor, Microsoft IIS, much greater than its lead on the web as a whole.
Not to mention, an ARM edition of Windows wouldn't be much good without either (1) a binary translator or (2) everyone rewriting their software to run on ARM. The OS does not exist in a vacuum.
or (3) everyone making source available? Could a more heterogeneous processor market drive people towards open source purely from the practical need to be able to build the right version for their own kit? (A sketch of what that looks like follows.)
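That's roughly how it already works for most open source code: the same source compiles for whatever the target happens to be, with the occasional architecture-specific branch hidden behind the compiler's standard predefined macros. A minimal, purely illustrative sketch:

    /* Illustrative only: the same source builds natively on x86-64 or ARM;
     * any architecture-specific code sits behind standard GCC macros. */
    #include <stdio.h>

    static const char *target_arch(void)
    {
    #if defined(__x86_64__)
        return "x86-64";
    #elif defined(__arm__) || defined(__aarch64__)
        return "ARM";
    #else
        return "something else";
    #endif
    }

    int main(void)
    {
        printf("Built for %s -- no binary translator needed, just a recompile\n"
               "on whatever machine (or cross-compiler) you happen to have.\n",
               target_arch());
        return 0;
    }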
Not as an internet server, but many companies can't get rid of x86 Windows servers because they have to (or rather, think they have to) run Windows applications like Exchange or SQL Server.
Is the 4-core limit really an issue? Really big deployments, where performance per watt matters most, are finding it more cost-effective to go with fewer cores per server because that gives a better balance between processing power and memory bandwidth. Look at those Google compute containers: each node has pretty modest specs.
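Back-of-envelope, with made-up numbers (both the bandwidth figure and the core counts below are assumptions for illustration, not measurements from any real part): the bandwidth available to each core shrinks as you pile more cores behind the same memory controller, which is exactly why modest per-node specs can come out ahead.

    /* Toy per-core bandwidth calculation; 12.8 GB/s and the core counts
     * are assumed for illustration only. */
    #include <stdio.h>

    int main(void)
    {
        const double socket_bandwidth_gbs = 12.8;   /* assumed per-socket memory bandwidth */
        const int core_counts[] = { 2, 4, 8, 16 };
        const int n = sizeof core_counts / sizeof core_counts[0];

        for (int i = 0; i < n; i++) {
            printf("%2d cores -> %.1f GB/s of memory bandwidth per core\n",
                   core_counts[i], socket_bandwidth_gbs / core_counts[i]);
        }
        return 0;
    }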