The acquisition of NeXT really gave Apple a lot of flexibility. Even in the 1990s, NEXTSTEP ran on m68k, SPARC, and x86. Apps could be compiled as "tri-fat" binaries, so a single binary distribution worked on all platforms.
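For the curious, here's a rough sketch of what those fat binaries look like on disk (my own illustration, not NeXT/Apple code): the file starts with a small header listing each architecture and where its Mach-O image lives, which is what lets one file serve every platform. The struct layouts mirror the standard <mach-o/fat.h> definitions; the fields are stored big-endian.

    /* fatinfo.c -- minimal sketch: list the architectures packed into a
     * "fat" (multi-architecture) Mach-O binary. Illustrative only; the
     * real definitions live in <mach-o/fat.h> on NeXT/Apple systems. */
    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>      /* ntohl(): fat headers are big-endian */

    #define FAT_MAGIC 0xcafebabeu

    struct fat_header {         /* start of the file */
        uint32_t magic;         /* FAT_MAGIC */
        uint32_t nfat_arch;     /* number of fat_arch entries that follow */
    };

    struct fat_arch {           /* one entry per architecture */
        uint32_t cputype;       /* e.g. 6 = m68k, 7 = i386, 14 = SPARC, 18 = PowerPC */
        uint32_t cpusubtype;
        uint32_t offset;        /* where that architecture's Mach-O image starts */
        uint32_t size;
        uint32_t align;         /* alignment as a power of 2 */
    };

    int main(int argc, char *argv[])
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <binary>\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        struct fat_header fh;
        if (fread(&fh, sizeof fh, 1, f) != 1 || ntohl(fh.magic) != FAT_MAGIC) {
            fprintf(stderr, "not a fat binary\n");
            fclose(f);
            return 1;
        }

        uint32_t n = ntohl(fh.nfat_arch);
        printf("fat binary with %u architecture(s):\n", n);
        for (uint32_t i = 0; i < n; i++) {
            struct fat_arch fa;
            if (fread(&fa, sizeof fa, 1, f) != 1) break;
            printf("  cputype %u, offset %u, size %u bytes\n",
                   ntohl(fa.cputype), ntohl(fa.offset), ntohl(fa.size));
        }
        fclose(f);
        return 0;
    }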
I have no doubt that Apple had Mac OS X running on Intel from day 1, as well as PowerPC. That legacy gives them a lot of flexibility in processor choices.
Rhapsody (the codename for what became OS X) had its first two developer releases available for x86. It wasn't until Mac OS X Server 1.0 that Apple publicly released only PowerPC builds.
I miss Yellow Box (OpenStep, NeXTSTEP, Cocoa, whatever you want to call it) for Windows. I still have it installed on my Windows XP machine from the last Objective-C-based WebObjects release, 4.5.2, but I can't redistribute apps with it.
It came with a built-in Sampler, and you could debug the compiled executables with Visual Studio (there was a converter that took the DWARF debug info and turned it into CodeView (CV) format for Visual Studio).
The apps looked native on Windows (unlike GNUstep), much like Cocotron, but you could develop them directly on the system.
That said, Apple's OS and the underlying Foundation/AppKit/UIKit are truly chameleonic - I'm sure they could migrate to new platforms easily.
Quad-fat: NeXTSTEP also ran on HP PA-RISC. In fact, an HP 712/100 was the nicest NeXT system I have ever used: blazingly fast (even in 2010!), easy to use, and dead silent.
Quick rule of thumb: if it's on AppleInsider, it's probably speculative bullshit. Doubly so if it's written by Daniel Eran Dilger, whose articles usually take the following form:
Recent news item
+ A bunch of tangentially related, usually revisionist details about Apple's history
+ Some tangentially related, laughably biased, usually revisionist shots at Microsoft/Google
+ (Sometimes) An insane, impossible-to-read graph/chart
= Unsubstantiated conclusion that reads like a wish list for what Daniel wants Apple to do, not what the evidence shows.
Total red herring. Sure, I've read some of his articles in the past, and I even skimmed this one to make sure I wasn't wrong, but that doesn't change a thing about the substance of them.
Why would you want that? Handwriting is slow, certainly slower than the keyboard iOS provides. I’m honestly puzzled why handwriting was ever a favored input method.
You are right about speed. However, I would like to have handwriting simply for symbols and drawing (e.g., mind-mapping). How are you going to enter math formulas faster with a keyboard than with a pen? The same goes for marking and annotating things. IMO, precision and speed for marking/annotating are pretty bad on iOS.
But I don't see how apps like OmniFocus can make use of it.
You'd need a thin stylus for that, and Apple doesn't ship one, which means they're not going to build it into the OS.
But surely it wouldn't be too hard (where "too hard" means technically infeasible) to create an application based on handwritten input (Graffiti or Rosetta/Inkwell-type technologies).
While writing might be a nice additional feature, I can certainly understand why Apple didn’t build iOS around it. (In the current climate this probably also means that handwriting won’t come to iOS anytime soon.)
I think we can thank RIM for that. While they were not the first, they arguably made the QWERTY keyboard ubiquitous and (for the first time) more than tolerable on a phone-sized device.
iOS has had a handwriting-based keyboard for Chinese since iPhoneOS 2.0.
But I'm guessing drawing letterforms with your finger isn't a very enjoyable input method, and most of the iPad styluses available aim at replicating a finger (since that's what most applications expect).
Actually, for drawing Chinese characters it's not that bad. I plan to get an iPad for my parents so they can draw Chinese characters instead of learning how to type (many people of the older generations in China cannot type Chinese on a computer because they never learned how).
I just wish the character recognition could be better.
Oh yes, sorry for the lack of precision. I meant "for Western European languages" (or, more precisely, for languages with a fairly low number of graphemes). For Chinese or Japanese kanji it makes a lot of sense to ask for tracing, since giving a keyboard system good, easy access to thousands of graphemes is not easy.
Though I believe the Japanese have a system for this which they use on a regular QWERTY keyboard; my sister demonstrated it to me (you basically type the phonetics and it builds the characters on the fly, something like that). For that reason (and because Japanese users are used to that kind of input), iOS has a QWERTY (romaji) keyboard and a 10-key Japanese keyboard, but I don't think it has a handwriting one. (It has 8 different Chinese keyboards: 3 Simplified, with "handwriting" and "stroke" versions, and 5 Traditional - handwriting, pinyin, zhuyin, cangjie, and stroke.)
Not part of the OS per se. Part of the platform, yes, but there were plenty of apps that never used handwriting recognition.
[Near as I can tell, the only association with Newton is the use of an ARM. I sure wouldn't use NewtOS on anything modern, even as cool as NewtOS was in its day.]
It won't stop here. Apple will put ARM in their laptops, and they will help usher in a new era of more power-efficient computing in a way that x86, in whatever shape, simply cannot. Just wait and see.
I don't think ARM is as clear a win in high-performance areas as you seem to think. ARM does let you get away with not having most of the decode logic that an x86 has to carry around, but that's pretty small peanuts compared to out-of-order logic with a decent reorder window, or a wide execution cluster with its associated bypass network.
And while instruction predication does let you avoid a lot of branching nearly for free in the ARM architecture, it makes out-of-order logic much less effective, because now there are a lot more dependencies between instructions. That was the reason the Alpha team decided not to use predication, even though they were familiar with the idea. It's also why the Itanium, an in-order architecture, used predication.
Yep, looking at a modern x86 core at 32nm (and 22nm is just around the corner), the x86 decoding is now just a tiny part of the core. Intel has been picking low-hanging fruit and not trying particularly hard to make things very low-power. The current area of work is integrating the GPU with the CPU on the same die (see Sandy Bridge).
Consumer electronics is driven by volume and price, so all we are really seeing is small, low-power cores becoming acceptably fast for mainstream computing.
No, he detailed specific differences in the branch predictors of the two mentioned architectures. Branch prediction is part of the microarchitecture of a CPU, not its ISA. These two should not be confused.
As the other commenter pointed out, I'm talking about instruction predication (also called conditional execution), not branch prediction. In any ISA you have branches that might or might not execute depending on some condition. With ARM, the first four bits of every single instruction - not just branches but loads and stores, addition and subtraction, etc. - are used to specify under what condition the instruction will execute, so you can have small 'if' statements that don't actually need any branches when they're compiled.
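To make that concrete, here's a small C sketch (my own example): the kind of branch an ARM compiler can if-convert into predicated instructions, where an x86 compiler would typically emit a conditional branch or a CMOV.

    /* Sketch: the kind of tiny 'if' a compiler can if-convert on ARM.
     * On ARMv7 the body below can compile to roughly:
     *
     *     CMP   r0, r1
     *     MOVGE r2, r0    @ takes effect only if a >= b
     *     MOVLT r2, r1    @ takes effect only if a <  b
     *
     * No branch at all: the condition field in each instruction decides
     * whether it does anything. On x86 the same code would typically
     * need a conditional branch or a CMOV. */
    int max_of(int a, int b)
    {
        int m;
        if (a >= b)
            m = a;
        else
            m = b;
        return m;
    }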
All that is really pretty awesome if you're designing your processor to go in order, but if you want to try out-of-order execution it becomes more complicated. Now you have to check every instruction to see if it's predicated, and if it is, you have a new dependency on the previous arithmetic instruction even when there isn't any data dependency between the two. Of course, you could argue that if you find a predicated instruction in ARM code, it's replacing a branch that would be there in x86 code, so overall your job is no harder. And you could go back and forth arguing over it.
The important thing to remember, though, is that the things that give the ARM ISA inherent advantages when you're making low-power, in-order processors aren't necessarily advantages when you're talking about high-performance, high-power processors.
If the inefficiency of the x86 ISA were a killer, we'd all be using Alpha, PowerPC, SPARC, PA-RISC, or Itanium processors today. The basic argument you're giving is about 20 years old by now, and the market has seen it tested.
ARM claims their new Cortex-A15 will provide five times the performance of existing smart phones. So the resources and interest are definitely out there.
Probably just as important for a MacBook replacement: "The introduction of Large Physical Address Extensions (LPAE) enables the processor to access up to 1TB of memory."
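For anyone wondering where the 1TB figure comes from: LPAE widens physical addresses to 40 bits, and 2^40 bytes is exactly 1 TiB. A quick sketch of the arithmetic:

    #include <stdio.h>
    #include <stdint.h>

    /* LPAE extends physical addresses from 32 to 40 bits; the 1TB in
     * the quote is simply the size of a 40-bit physical address space. */
    int main(void)
    {
        uint64_t max_phys = 1ULL << 40;               /* 40-bit physical address space */
        printf("%llu bytes = %llu GiB = 1 TiB\n",
               (unsigned long long)max_phys,
               (unsigned long long)(max_phys >> 30)); /* 1024 GiB */
        return 0;
    }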
A typical improvement for a new microarchitecture is a factor of 2, spread across different units. For example, we could speed up the full-word adder (the most common roadblock to higher clock frequencies in current CPUs) by 10 percent, add a scoreboard that lets us issue 1.3 instructions on an average load, and add bypass logic that cuts delay by 1/5 (for a 5-cycle pipeline), and we get 1.1 × 1.3 × 1.2 ≈ 1.72 times our previous speed.
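Spelled out, that compounding is just multiplication of the individual gains (same hypothetical numbers as above):

    #include <stdio.h>

    /* Independent microarchitectural improvements multiply rather than add. */
    int main(void)
    {
        double adder  = 1.1;  /* 10% faster full-word adder            */
        double issue  = 1.3;  /* scoreboard: ~1.3 instructions per issue */
        double bypass = 1.2;  /* bypass logic trims pipeline delay     */

        printf("combined speedup: %.2fx\n", adder * issue * bypass); /* ~1.72x */
        return 0;
    }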
I assume that ARM already has all those fancy bypasses and scoreboards. So how would they get that 5x speedup?
I think it will be true for some special cases, like a JavaScript interpreter or video decoding. I bet on JS.
It is hard to speed up a mature architecture by five times.
Trying to parse their marketing material as best I can, the 5x refers to a dual-core configuration clocked at 1.5 GHz. Configurations with more than 8 cores are also available, with clock speeds from 1 to 2.5 GHz.
Are CPUs even that big a share of a laptop's power budget these days? I know the transition from CCFL to LED backlighting helped, but I think displays are still the biggest power draw. If you look at an Atom system, the chip with the biggest heatsink isn't the CPU; it's the chipset (or the GPU, if it's discrete).
It seems to me that in order for an ARM processor to be viable for something like the MacBook Air (which is already using all the easy power-reduction measures like LED backlights and SSDs), the ARM chip will have to be twice as fast and draw half as much power as an Atom.
Being able to run Windows is a key competitive advantage for Macs now, and Windows (due to the problems of application support) will likely never run on ARM. Hence, neither will Macs.
x86 has always been predicted to fall behind competing architectures, and it's always kept up--at least for personal computers--because of the large vested interest in keeping all that x86 code running. History is littered with better-architected CPUs that couldn't beat x86. ARM survived because ARM is an embedded processor, and PPC survives as an embedded processor, but Apple's been down the road of trying to shoehorn an embedded processor design into Macs before, and ended up migrating to x86.
How will this affect Boot Camp? Won't it put a stop to the ability to run Windows natively?
One of the factors that helps people move over to OS X seems to be the knowledge that they can still install Windows if they need or want to. Perhaps Apple will buy Parallels and include it as part of the OS (although VMware would have a case if they saw this as anti-competitive)?