
I completely agree on performance per watt: Intel is on the path to where they need to be, but ARM is on a path to parity with chips that even AMD's x86 offerings can't touch on a core-by-core basis. Now, with Microsoft's Windows 8 trying its very best to be a tablet experience, sales figures suggest people would much rather have an Android or Apple tablet than even an i5/A10 laptop running Windows 8. The technicalities you present are right on the money, but what we have here is a perfect storm: Microsoft wants to be a tablet OS manufacturer and is skirting the traditional desktop/laptop experience, leaving Intel with no real face value for its contribution.

When people go to their local stores and see rows of tablets that look like tablets, rows of laptops that look like tablets, and rows of desktops that look like tablets, they just seem to buy actual tablets. Sure, an Intel i3 will handily beat the upper echelon of ARM offerings, but with Android and iOS entirely optimized for this experience, we are finally seeing what AMD fans have been shouting for decades: the extra horsepower does not come into play often enough to be a deal breaker. Your web page may load about a third slower on an ARM rig, but when the Intel machine loads it in 1.5s and your tablet loads it in 2s, we hit the law of diminishing returns. If a dual-core ARM A15 can consistently run at around 40-50% of the speed of a mobile i3, a quad-core ARM should settle at around 60-75% while being asked to do much, much less.
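To put numbers on the diminishing-returns point, here is a minimal Python sketch using the same illustrative 1.5s vs 2s figures from above (not measurements):

    intel_load_s = 1.5   # illustrative page-load time on the x86 laptop
    arm_load_s   = 2.0   # illustrative page-load time on the ARM tablet

    relative_gap = (arm_load_s - intel_load_s) / intel_load_s   # ~0.33, i.e. a third slower
    absolute_gap = arm_load_s - intel_load_s                    # 0.5 seconds

    print(f"relative: {relative_gap:.0%} slower, absolute: {absolute_gap:.1f}s slower")

A "33% slower" figure sounds dramatic on a spec sheet; half a second on a page load barely registers.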

With Intel and ARM you are also dealing with two very different ecosystems. x86 has had to be fast because most of the applications you run on a daily basis are likely not really optimized, profiled, or threaded beyond a couple of compiler switches. The chips have to be fast because the code is so slow. With Android and iOS, the language, libraries, and sandboxing improve the underlying mechanisms to the degree that most of the code that matters is optimized by Apple and Google, whereas the equivalent Microsoft Windows libraries are not as optimized, and in many cases are so specialized that they give you the look and feel of a WordPad-type app rather than what you are really after.

Basically, I feel that pointing to Intel's raw performance as the reason a platform will succeed is improper and an unfair comparison. People are moving en masse to tablet and smartphone ecosystems not because they are faster or run on a particular processor, but because they feel like custom solutions and the overall integration is acceptable. So I don't think it is ARM vs. Intel but rather tablet vs. notebook and laptop. If Microsoft keeps making its desktop OS look and feel like a tablet OS, people will just buy tablets, and Windows 8 doesn't have the applications, word of mouth, or market penetration to compete there right now.

If things don't change, and quickly, we may just get a Microsoft and Intel "double clutch".

"Double clutch: this is where a non-swimmer ends up in deep water and, out of panic, grabs the closest person to them to stay afloat." http://www.firstaid-cpr.net/lifeguarding/minor_major.html




>x86 has had to be fast because most of the applications you run on a daily basis are likely not really optimized, profiled, or threaded beyond a couple of compiler switches. The chips have to be fast because the code is so slow. With Android and iOS, the language, libraries, and sandboxing improve the underlying mechanisms to the degree that most of the code that matters is optimized by Apple and Google, whereas the equivalent Microsoft Windows libraries are not as optimized, and in many cases are so specialized that they give you the look and feel of a WordPad-type app rather than what you are really after.

This is so wrong, I actually don't know where to begin.

1. It's true that most code isn't optimized for x86. But most code isn't optimized, period. Optimization is freaking hard. Android and iOS aren't necessarily better optimized than Windows. And Linux, especially RHEL, is screaming fast on the new Intel chips. Windows isn't that terrible, either.

2. Sandboxing actually hurts performance, because it requires an additional layer between the OS and userland to check that the code the user is executing is safe (see the sketch after this list).

3. None of this actually matters for chip architecture, since #1 is true of code in general, and #2 doesn't have any special architecture-based support.

4. x86 isn't just Windows. It's Linux, too.
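On point 2, a loose illustration in Python. This is only an analogy for "every operation has to pass through an extra checking layer"; it is not how a real mobile sandbox is implemented:

    import timeit

    data = list(range(1000))

    def raw_read(i):
        # direct access, no extra checks
        return data[i]

    def checked_read(i):
        # an extra gatekeeping layer in front of the same operation
        if not isinstance(i, int) or not (0 <= i < len(data)):
            raise PermissionError("access not allowed")
        return data[i]

    print("raw:    ", timeit.timeit(lambda: raw_read(500), number=100_000))
    print("checked:", timeit.timeit(lambda: checked_read(500), number=100_000))

The checked version does the same work plus the gatekeeping, so it can only ever be slower; real sandboxes pay an analogous (if differently shaped) cost.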


Let me clarify.

When you program for Android/iOS, how much of the logic you write comes from optimized libraries and how much is your own craft? Now look at the entire Market Place / App Store and figure out how many of those apps rely entirely on the optimized Android/iOS libraries.

While it may be true that some wander off the beaten path and write their apps directly against OpenGL ES, maybe even in C/C++, most rely on the frameworks and libraries already built in (which are indeed optimized).


You're still running on the blind assumption that Apple and Google are magically better at optimization than everyone else. I would not make that assumption because, as I said before, optimization is hard. There are a billion variables that go into your code's performance, and tiny changes can completely ruin it or make it awesome.

Look at Android as an example. For the first few years of its existence, the OS was plagued with bad battery life due to poorly optimized code and bugs. Android 3.0 was basically scrapped as an OS due to bad performance.

iOS 5.0 had absolutely horrible battery life due to a bug. And the Nitro JavaScript engine isn't available outside of Safari, either.


So are you suggesting that it is out of reach for Google and Apple to publish libraries and frameworks optimized for a specific hardware platform (ARM A7 + A15), or is it that Windows already optimizes for Intel/AMD offerings to such an overwhelming extent that the differential would be moot?


It's out of reach for anyone - Google, Apple or Microsoft - to optimize their libraries enough to make up for CPU performance differences.
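A quick Amdahl's-law-style sketch of why. The 50% library share and 2x library speedup below are made-up numbers, just to show the shape of the argument:

    # If only the vendor-library share of runtime gets faster, the overall
    # gain is capped by how big that share is (Amdahl's law).
    def overall_speedup(library_fraction, library_speedup):
        return 1.0 / ((1.0 - library_fraction) + library_fraction / library_speedup)

    # Hypothetical: half the runtime is in vendor libraries, and the vendor
    # makes that half twice as fast.
    print(overall_speedup(0.5, 2.0))   # ~1.33x overall -- nowhere near closing a 2x CPU gap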


The only data point I can think of is this JSON benchmark:

http://soff.es/updated-iphone-json-benchmarks

...where Apple beats three community-developed libraries. But maybe it just wasn't that hard to beat them.
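For flavour, a rough Python analogy (emphatically not the iPhone benchmark above, and the absolute numbers don't matter): a stdlib parser with a native fast path against a general-purpose parser pressed into the same job.

    import ast
    import json
    import timeit

    # JSON that is also a valid Python literal (strings, numbers, lists, objects only).
    doc = json.dumps({"users": [{"id": i, "name": f"user{i}"} for i in range(200)]})

    fast = timeit.timeit(lambda: json.loads(doc), number=1000)        # stdlib json, C fast path when available
    slow = timeit.timeit(lambda: ast.literal_eval(doc), number=1000)  # general literal parser, not tuned for JSON

    print(f"json.loads: {fast:.2f}s   ast.literal_eval: {slow:.2f}s")

A well-placed native fast path buys a lot, so beating a handful of ad-hoc parsers isn't strong evidence of an optimisation culture.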

I'm also skeptical because I don't see how Apple has any incentive to optimise code ever. Their devices (ARM & x86) are doubling in CPU power left and right while the UX basically stays the same. The second-to-last generation inevitably feels sluggish on the current OS version...which just happens to be the time when people usually buy their next Apple device. Why should they make their codebase harder to maintain in that environment?


Look at https://github.com/johnezang/JSONKit, which is fast.


>...where Apple beats three community-developed libraries.

That's just in one very restricted area (JSON parsing) where there are TONS of third-party libraries of varying quality for the exact same thing. Doesn't mean much in the big picture.

>I'm also skeptical because I don't see how Apple has any incentive to optimise code ever.

And yet, they used to do it all the time in OS X, replacing badly performing components with better ones. From 10.1 on, each release actually had better performance on the SAME hardware, until Snow Leopard at least. I guess they hit a plateau there, where all the low-hanging-fruit optimisations had already been made.

Still, it makes sense to optimise aggressively, if for nothing else than to boast better battery life.


> Still, it makes sense to optimise aggressively, if for nothing else than to boast better battery life.

No doubt about 10.0-10.5/10.6. But that seems to have been an afterthought for the last two OS X releases:

http://www.macobserver.com/tmo/article/os-x-battery-life-ana...

And has there ever been an iOS update that has made things faster on the same hardware?

I don't think that Apple is intentionally making things slower, which is what I'm trying to say with the JSON parser (it is easy to write a wasteful implementation). But in the big picture, they're not optimising much either.


How is this any different on Android/iOS than it is on Windows/Linux/Mac?


>"When you program for Android/IOS how much of the logic you write is referenced from optimized libraries and how much is your own craft?"

It doesn't matter either way.

For one, Apple and Google aren't that keen on optimising their stuff either.

Second, most desktop applications use libraries and GUI toolkits from a major vendor, like Apple and MS, so the situation where "a large part of the app is made by a third party that can optimise it" exists for those too.

Third, tons of iOS/Android apps use third-party frameworks, like Corona, MonoTouch, Titanium, Unity, etc., and not the core iOS/Android frameworks.

Fourth, the most speed-critical parts of an app are generally the dedicated stuff it does, not the generic infrastructure code that iOS/Android provide.





