And that is that there is no serious competition for the hard combination of good taste, obsession with details that seem negligible but ultimately matter a lot, etc., embodied in Apple's recent products.
I know this sounds like raving Apple fanboy stuff, but I do think it's rarely acknowledged explicitly; it's either assumed by those in the know, or ignored by those not.
For example, I would never in a million years want to develop Android software; why? Because it's written in a language I loathe, and the API is workmanlike but completely uninspired (disclaimer: I've never written any code for it). In contrast, the Cocoa Touch APIs are delightful, and MIT-style (vs. New Jersey) at most levels. Perfect? No. Amazingly crafted? Yes.
I claim the secret sauce to their APIs is actually a character at Apple named Ali Ozer who, along with his crew, has maintained a firm hand on the tiller, and learned a huge amount about object-oriented design for interactive systems over the past couple of decades.
iPhone OS is the result of throwing out a lot of the cruft and starting over, but keeping the hard-earned knowledge. (Core Animation being one of the key underlying technologies for iPhone OS that found its way back to the desktop, though only partially, due to all the cruft in normal desktop Cocoa.)
And Apple works hard at the silicon-to-UI whole-stack integration and performance tuning, something that other vendors famously haven't been able to do (because they don't control the whole stack), or haven't wanted to do. Microsoft has recently figured this out, according to Ballmer, and intends to do the same.
I could go on, but I'm sure I'll get downmodded to death already. ;-)
The nice thing about Android is you don't have to use Java. Many other languages target the JVM, which is often trivial to convert to Dalvik (it depends on the language). Right now I'm working on an entire app in Duby (a Ruby-like language with static typing and type inference, with no baggage; reaches native Java speed easily), and loving it. All I had to do was change my compile command to dubyc and I was ready to go. JRuby is in the works, and apparently Scala is possible, and I'm sure many others are working on it too.
I've never written an iPhone app, and this is my first Android app, but I find the API and the SDK to be perfectly easy to work with; it was very easy to get started and quickly start prototyping. I'd like to know what parts of the Cocoa APIs you consider delightful though, just out of curiosity.
What makes Cocoa and Cocoa Touch really nice isn't that parts of them are delightful, it's that all of the parts work really well together. They were written to take advantage of Objective-C, and the tools were built for using them. Interface Builder makes UIs really easy, and CoreData makes persistence almost automatic. You should try building an iPhone app and see how it compares.
I would try it out, but right now my iPhone is on Craigslist and a Nexus One is in the mail. :) I run Linux so that wouldn't be very easy to do anyway.
Re: Interface Builder, I agree, but I don't think anything like that would work well for Android. It targets all kinds of devices and screen sizes; it really has to be flexible and that'd be harder to do with a WYSIWYG UI tool. That being said, someone made one for Android, but I still prefer coding it by hand: http://droiddraw.com/ (it ain't the prettiest thing but it seems to work; you can even load up your own layouts).
I don't question the rest, and I simply don't know enough about the iPhone APIs to compare them to Android, but I will say this: everything certainly works together in the Android APIs. I'm not 100% sure what the purpose of CoreData is -- preferences? generic data storage? -- but both of those are part of Android as well.
Funny that you mentioned everything working together (albeit in a different context); that's the biggest difference I noticed between how iPhone apps work and how Android apps work. See Activities (Android) vs. Apps (iPhone), Content Providers, Intents -- just about everything about Android apps is designed around working together. For example, in every iPhone app I've used that has a browser in it, it's always essentially a WebKit view with some primitive controls, or it says "Halt app and switch to Safari?". In Android, most apps open up the native Browser app's main activity, and hitting Back will close it and go right back to where you were. It's not really based on multitasking or switching apps; it just reuses the Browser's Activity to accomplish the same task. Applications are all on the same level, even built-in ones, and can all work together beautifully by simply calling Activities of other apps to perform specific tasks, or controlling them through Intents, etc. iPhone apps tend to be much more isolated.
You know, Interface Builder existed before the iPhone. Developers used it to build out interfaces for Mac applications, which could run on devices with a big variety of screen sizes. Interface Builder is screen-size agnostic for the most part. When Apple first released third-party app support, their iPhone SDK didn't even support Interface Builder.
Interface Builder existed way back in the early 90s, and someone who used it back then will find the modern version fairly familiar. It's been around quite a bit longer than the iPhone.
> You should try building an iPhone app and see how it compares.
I have, and Android wins for me. Cocoa Touch is a decent API (in most cases; what's with NSImage vs CGImage vs CIImage?), but Objective-C is seriously outdated. It usually needs more boilerplate than even Java, and manual memory management, lack of namespaces, and header files are silly in 2010.
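For onlookers, here's roughly what that manual-memory boilerplate looks like in practice (a minimal sketch; the class and names are made up):

    #import <Foundation/Foundation.h>

    @interface Person : NSObject {
        NSString *name;
    }
    - (void)setName:(NSString *)aName;
    @end

    @implementation Person
    - (void)setName:(NSString *)aName {
        if (name != aName) {
            [name release];        // give up our claim on the old value
            name = [aName copy];   // take ownership of the new one
        }
    }
    - (void)dealloc {
        [name release];            // balance the copy in -setName:
        [super dealloc];
    }
    @end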
Whatever. Objective-C is far closer to the elegant dynamism of Smalltalk than Java is.
> manual memory management
Makes sense. This is a resource constrained device.
> lack of namespaces
This is a phone not an enterprise server.
> header files
On a device with an underpowered processor, the focus on C-based languages has been paying serious dividends in the responsiveness of the platform's applications.
Well, ObjC occupies a very interesting place. It's as low-level as C++ but has high-level message-passing method dispatch as well. If it seems a little crufty, it's because it's living in a very peculiar place, trying to serve a variety of masters.
Hopefully 4.0 will start to pave the way for the ObjC 2.0 stuff to come to the iPhone. They are really big improvements to the language.
Also, rule-based syntax translators could give you a ton of stuff as a preprocessing pass to your code. Write something for NSArray literals and you'll be a hero.
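Something like this, say; the sugar in the comments is hypothetical input for such a translator, and the body is the plain Foundation code it would expand to:

    #import <Foundation/Foundation.h>

    void demo(void) {
        // Hypothetical sugar a translator could accept:
        //   names  = @[@"foo", @"bar"];
        //   person = @{@"name": @"Ali"};
        // ...mechanically expanded into the real Foundation calls:
        NSArray *names = [NSArray arrayWithObjects:@"foo", @"bar", nil];
        NSDictionary *person = [NSDictionary dictionaryWithObjectsAndKeys:
                                   @"Ali", @"name", nil];   // value first, then key
        NSLog(@"%@ %@", names, person);
    }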
To be pedantic, "Objective-C 2.0" features are things the iPhone always had, like properties and fast enumeration, and/or had first, like the modern runtime (which solves the fragile base class problem, and which Mac apps still only get if they're 64-bit).
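Both of those have looked the same on the phone from day one; a minimal sketch (class and names made up):

    #import <Foundation/Foundation.h>

    @interface Track : NSObject {
        NSString *title;
    }
    @property (nonatomic, copy) NSString *title;   // Obj-C 2.0 property
    @end

    @implementation Track
    @synthesize title;   // compiler generates the getter/setter pair
    - (void)dealloc { [title release]; [super dealloc]; }
    @end

    // Obj-C 2.0 fast enumeration, also there from the start:
    //   for (Track *t in tracks) NSLog(@"%@", t.title);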
Blocks were a Snow Leopard feature, implemented as a C extension, and sprinkled throughout Apple's APIs; no direct relationship to Obj-C.
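For instance, one of the block-taking methods Snow Leopard added to Foundation (a minimal sketch):

    #import <Foundation/Foundation.h>

    NSArray *sortedNames(NSArray *names) {
        // NSComparator is just a typedef'd block; blocks work from plain C too.
        return [names sortedArrayUsingComparator:^NSComparisonResult(id a, id b) {
            return [a compare:b];
        }];
    }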
With the first iPad model still at 256 MB RAM, and no sign of GC in the iPhone OS 4 preview, memory management will be with the platform for a long time. I think we'll see multi-core first.
Yes, something to replace the super crufty NSArray / NSMutableArray / NSSet / NSDictionary syntax would be really good. Smalltalk message passing was meant to have short, small one-symbol binary operators, but unfortunately that elegance is not there in Objective-C.
To be honest with you, [[stuff objectAtIndex:5] objectForKey:@"name"] is less readable than stuff[5][@"name"]. And since you type this kind of code constantly, over and over again, it really stacks up, cluttering something that would be simple to read in other languages.
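To illustrate how it stacks up with a nested structure (a sketch; stuff is assumed to be an NSArray of NSDictionaries):

    #import <Foundation/Foundation.h>

    void lookup(NSArray *stuff) {
        // What Foundation actually requires:
        NSString *city = [[[stuff objectAtIndex:5] objectForKey:@"address"]
                             objectForKey:@"city"];
        // The hypothetical shorthand from above:
        //   NSString *city = stuff[5][@"address"][@"city"];
        NSLog(@"%@", city);
    }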
Except that if you're used to Smalltalk, this is just standard fare.
I agree that a Python-like shorthand would be great at the object level, but then you'd have a mixture of shorthand and longhand, and that would be ugly.
You're right, they couldn't tell. But that doesn't mean it's allowed -- unless you're writing C, C++, Obj-C, or JavaScript, it's disallowed, period. That doesn't mean people won't do it, though.
I'm not sure I agree that code translation is entirely disallowed.
Part of the problem, though, is that the legalese in the iPhone OS 4 agreement revision is maddeningly obscure and unhelpful. We're not even sure if embedded Lua interpreters are really banned. I'm sure Apple is considering that very question.
Assuming we're only talking about the iPhone APIs, then there's no NSImage or CIImage (Mac OS X desktop only).
As for UIImage and CGImageRef, which are on the iPhone, UIImage is an Objective-C class, whereas CGImageRef is a C opaque struct.
CGImageRef comes from the CoreGraphics (aka Quartz) lower-level 2D drawing APIs (implemented using C functions). This framework was around before the iPhone as it is also found in the Mac OS X APIs. UIImage gives you the ability to use a CGImageRef as an object. It doesn't expose all of the functionality found at the CoreGraphics layer, but it can be easier to use with other ObjC classes.
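Crossing that boundary looks roughly like this (a sketch; the filename is made up):

    #import <UIKit/UIKit.h>

    void inspect(void) {
        UIImage *image = [UIImage imageNamed:@"photo.png"];  // Obj-C object
        CGImageRef cg = image.CGImage;                       // the C struct inside

        // Drop to the CoreGraphics C functions for what UIImage doesn't expose:
        size_t width  = CGImageGetWidth(cg);
        size_t height = CGImageGetHeight(cg);

        // ...and wrap a CGImageRef back up as an object when done:
        UIImage *wrapped = [UIImage imageWithCGImage:cg];
        NSLog(@"%lux%lu %@", (unsigned long)width, (unsigned long)height, wrapped);
    }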
You're right, I was thinking of desktop Mac OS X there, and an especially annoying section of code where I was trying to get those three image APIs to talk to each other. Apple did in fact remove a lot of cruft and duplication in the iPhone API, and that's good. Still, there are a number of cases like that where you have to switch between ObjC method calls and straight C functions. Not a huge deal, but it's clearly not Smalltalk-style pure OO.
Scala and Clojure run like molasses on Dalvik and JRuby will likely be the same. The Dalvik VM that Android targets is not like the JVM and one of its deficiencies is that it supports dynamic languages very poorly. Duby may run fine on JVM but if it's really Ruby-like, it'll be bad on Android.
No. If you want to write professional-quality apps on Dalvik, you'll be writing in Java or another language that's just as unimaginative and straitjacketed.
Which is not to say that Objective-C is an improvement. At least iPhone apps used to offer Lua or even Scheme scripting. No longer.
Duby has been absolutely great on Dalvik so far. Like I said, Duby is not a dynamic language; it just provides one of a dynamic language's main benefits through type inference and a drastically less verbose syntax. I translated the same code from Java to Duby (reducing its size and making it a whole lot more fun to work with) and noticed absolutely no drawbacks. That's not to say there were no slowdowns, but if there were, they were too small for me or anyone else to notice.
Do you have any benchmarks or references for Scala and Clojure's speed on Dalvik? I haven't been able to find much. (That's curiosity, not snootiness.)
Scala is more statically typed than Java. It's not a dynamic language; type inference (what makes it look like Ruby to some) is done by the compiler. Modern (i.e., based on concepts dating back to the late 70s and 80s instead of the 60s and early 70s) statically typed languages don't have to have the verbosity of Java and C++: see F#, OCaml, and Haskell.
It does use reflection for certain types of pattern matching (only similarity with the JVM dynamic languages), but I'd expect most of the issues (note: I haven't tried myself) are due to the Scala compiler specifically targeting HotSpot.
It doesn't matter how static Scala or any other language is, all that matters is how well its type system fits into the target VM, and Java VMs are optimized for Java.
When the type-system doesn't fit you have to resort to workarounds, like allocating more objects than you should, or using introspection, or generating bytecode at runtime.
The JVM does fine here, but Dalvik is a VM created for a restricted environment.
That said, I don't think static languages should have problems ... as long as they don't stray too far from Java ... like being lazy or having type-classes.
> Modern ... statically typed languages don't have to have the verbosity of Java and C++
They don't have subtype-polymorphism either (basically OOP).
Since you mentioned OCaml and F#, you should take notice that the OOP subset doesn't support type inference when you're dealing with interfaces; that's because it isn't possible. OOP was designed for dynamic type-systems, and it mixes with static typing like oil and water.
Also, Scala's type inference is really restrictive and shouldn't be put in the same league as any language with a Hindley-Milner type inference system.
The type inference is really useful when you're dealing with parametric polymorphism. Scala doesn't really have that either ... generics only save you from explicit type-casts, nothing else, and are a far cry from Haskell's type-classes or C++'s templates (which are late-bound). The recently introduced Manifests or the implicit conversions save you in many cases, but those are just ugly hacks that only solve certain use-cases.
I really don't get why Scala is so popular. It's as if people are so tired of Java that they are looking for a way out, willing to compromise just for some syntactic sugar.
It's interesting to note that none of the issues of subtype-polymorphism (tight coupling) or those of parametric polymorphism (hard to get right) are of importance to dynamic languages. In a dynamic language polymorphism is simply implicit.
That's why I hope people will invest in VMs that support dynamic languages in the future ... static typing is a bitch to get right, and it would be easier if the VM wouldn't impose a strict type-system on you.
First, this is an excellent comment and should have a much higher score than my fanboyish original :-)
I agree that Scala is restricted by the JVM (and the aim of full compatibility with Java), and as a result type inference, pattern matching, and more suffer heavily. F#, OCaml, and Haskell are better examples of modern type systems as they're less encumbered. I am fairly curious about how F# works around the CLR (or rather, how much the CLR accommodates type systems different from C#'s); I guess it's up to me to RTFM.
> It's as if people are so tired of Java that they are looking for a way out, willing to compromise just for some syntactic sugar.
I think this hits the nail on the head, but I don't see it as a bad thing. If there's a very strong reason to be on the JVM (e.g., other projects or libraries, operations preferences), it's nice to have an option that lets one have a more productive and enjoyable experience.
There are times where it feels like a big compiler hack (which it is), but first-class functions (even if they're compiled down to anonymous classes), closures, optional lazy evaluation, type inference, limited pattern matching, case classes, traits/mixins, "encouraged" immutability, the collections, etc. all add up.
There are occasionally bugs and performance issues with the compiler and the collections library, but overall I don't see what's compromised compared to programming in Java itself. I'd argue the syntactic-sugar label applies more to Java 7's planned closures and first-class methods.
> OOP was designed for dynamic type-systems
Yes, that's literally true. OOP feels natural in dynamically typed languages, even when it's bolted on (CLOS in Common Lisp, Perl 5).
It's still a mystery to me why Java won out, given that Smalltalk had (at the time) better-performing virtual machines (compared to the older JVMs that were superseded by HotSpot, which originated with Strongtalk) and great tooling. There are known examples of teams of inexperienced programmers, working under guidance, producing large, well-functioning projects in Smalltalk.
> That's why I hope people will invest in VMs that support dynamic languages in the future ... static typing is a bitch to get right, and it would be easier if the VM wouldn't impose a strict type-system on you.
Clojure, in my view, suffers the most from the type system that the JVM imposes. It would be interesting to see if Clojure and Scala would eventually target less restrictive VMs (reverting to compiler hacks on the JVM to functionally emulate these VMs).
> It's still a mystery to me why Java won out, given that Smalltalk had (at the time) better-performing virtual machines (compared to the older JVMs that were superseded by HotSpot, which originated with Strongtalk) and great tooling.
It's very simple. Smalltalk was expensive. Students, hobbyists, people working at cheap corporations, startups, open source projects, and the like couldn't write, share, and distribute working systems on it.
I understand there were some good vendors with steep student discounts and the like. My roommate loved some of them. But he could not hack with friends for free (or even very cheap) on it.
And that little bit of money makes all the difference. Not that it would have taken off otherwise; there are many factors, but no language or VM that charges for access has taken off in a generation except where access to hardware is strictly limited by law (smartphones) or expense (FPGA, large scale microcontroller projects).
> It's very simple. Smalltalk was expensive. Students, hobbyists, people working at cheap corporations, startups, open source projects, and the like couldn't write, share, and distribute working systems on it.
In fact, it was the deliberate strategy to become a "boutique" language, a secret weapon of the Fortune 500. The Smalltalk companies missed out on "The Bazaar" and the mindshare benefits of an open community.
I doubt this. Any language can compile to the JVM, but that doesn't mean the performance will be the same. Technically, every language compiles down to assembly, but that doesn't mean everything runs at the speed of hand-written assembly.
"I would never in a million years want to develop Android software; why? Because it's written in a language I loathe, and the API is workmanlike but completely uninspired (disclaimer: I've never written any code for it). "
i.e. I'm making a judgement on something I'm ignorant about and never made an effort to know.
I've done Android stuff for a year or so and vastly prefer Cocoa Touch. However, the OP is wrong; Android is inspired: inspired to use idiotic API names like spinner (rather than picker, chooser, or dropdown), toast, intent, etc.
What is a picker or chooser? I know exactly what a spinner is: it's named for a spinning number control (like a mechanical odometer with a knob), unless they're using "spinner" to describe something completely different from what a spinner is in desktop UI parlance.
My point is that the API decisions are probably based on experience with a different developer culture, in which those words make perfect sense and "picker" and "chooser" are completely vague. Different groups have different terminology.
I agree the odometer-style widget makes sense as a spinner, but this [1] doesn't, in my opinion. My first choice would probably be popup menu or dropdown, then picker or chooser.
> I claim the secret sauce to their APIs is actually a character at Apple named Ali Ozer who, along with his crew, has maintained a firm hand on the tiller, and learned a huge amount about object-oriented design for interactive systems over the past couple of decades.
I agree with this. One of the things that makes most Apple APIs great is that they are not part of the Taligent legacy of deep class hierarchies that so infects other frameworks. They keep class hierarchies relatively flat and lean heavily on the relatively inexpensive delegation that ObjC offers. It's very refreshing and feels far more Smalltalk-ish than Java-ish.
Apple designs a product knowing it will sell millions of copies, so it makes sense for them to spend the time to get it right. PC manufacturers have greater volume, but no individual product is anywhere near as ubiquitous as a Macbook Pro or iMac. They're stuck in a trap where no single model can sell well enough to justify an Apple level of quality even if they had the design sense to produce one.
Close, but Apple's real secret is that they make premium products, with profit margins that allow them to invest in researching the next big thing and spend time doing it right.
Dell, HP, Sony, Acer, Toshiba, Lenovo: these guys live and die by razor-thin margins of the "we'll make up for it in volume" strategy.
Ultimately I think an (un)healthy share of consumers just don't care about the details of the experience enough to spend more money.
> Android software; why? Because it's written in a language I loathe
Okay, you despise Java -- and that's fine -- but you prefer a product that can only be programmed in Objective-C? Schizophrenia much?
I've written for both platforms. Cocoa Touch with Xcode is certainly nicer than Android, but Android's language is slightly better than the choices available for iP[ad|hone]. Either of them would be nicer with Python, Ruby, Lisp, or even JavaScript.
I have the same opinion and I don't see anything weird about it.
They're both OO languages without many features compared to C++ or something, but Obj-C is dynamic and much more flexible (see NSProxy, NSArray class clusters, etc.) whereas Java is not at all.
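For a taste of that flexibility, here's the canonical NSProxy pattern: a stand-in object with no methods of its own that forwards every message to a wrapped target (a minimal sketch; the class name is made up):

    #import <Foundation/Foundation.h>

    @interface Forwarder : NSProxy {
        id target;
    }
    - (id)initWithTarget:(id)aTarget;
    @end

    @implementation Forwarder
    - (id)initWithTarget:(id)aTarget {
        target = [aTarget retain];   // NSProxy has no -init, so no [super init]
        return self;
    }
    - (void)dealloc { [target release]; [super dealloc]; }
    - (NSMethodSignature *)methodSignatureForSelector:(SEL)sel {
        return [target methodSignatureForSelector:sel];
    }
    - (void)forwardInvocation:(NSInvocation *)invocation {
        [invocation invokeWithTarget:target];   // replay the message elsewhere
    }
    @end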
Plus Obj-C is actually C, so you can write "if (!i)" without the compiler deciding you're too stupid to handle implicit type conversions.
By the way, I've noticed that people (who don't seem to use it often) complain about the extremely long method names in Obj-C, but if you count by the number of tokens instead it can be pretty terse. And I think that's how reading natural language is supposed to work, anyway…
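For example (a tiny sketch): the selector below is one "name" but only a few tokens, and each token labels the argument that follows it:

    #import <Foundation/Foundation.h>

    NSString *fix(void) {
        return [@"Hello wrold" stringByReplacingOccurrencesOfString:@"wrold"
                                                         withString:@"world"];
    }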
That's one of Cocoa (Touch)'s strengths, actually, and part of the 'secret sauce': Ozer and crew have developed a really good set of naming conventions over the years, and use them well.
I like the long names. It's very Smalltalk-like, and Xcode completion mostly takes the pain out of it.
Yes, because Obj-C is truly just Smalltalk with C-style control structures. Maybe I'm damaged, because I've used Smalltalk in a previous existence, and loved it.
For what definition of "dynamic language" is that statement true? Both languages do late binding of methods to objects; Java merely happens to smear a layer of type safety on top of it.
Frankly, the two languages are semantically all but identical, modulo the ability of Obj-C to fall back to a "bare memory" data model when you want to.
They certainly don't bind methods to objects at the same time - Obj-C does it when you send the message. You can send and catch messages with no methods attached to them, or that weren't even declared in the first place (at the expense of a compiler warning).
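A sketch of both sides (Resilient, backupObject, and refresh are made-up names):

    #import <Foundation/Foundation.h>

    // Catching: an object can hand off selectors it never implemented.
    @interface Resilient : NSObject {
        id backupObject;   // assumed to be set elsewhere
    }
    @end

    @implementation Resilient
    - (id)forwardingTargetForSelector:(SEL)sel {
        return backupObject;   // this object receives the message instead
    }
    @end

    // Sending: the selector is built at runtime, so the compiler never
    // sees a declaration for it.
    void poke(id controller) {
        SEL sel = NSSelectorFromString(@"refresh");
        if ([controller respondsToSelector:sel])
            [controller performSelector:sel];
    }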
Of course there are other differences, but the most important one is just that the class library is much better - UI doesn't rely on inheritance, there's no class named LinkedHashMap (see http://ridiculousfish.com/blog/archives/2005/12/23/array/), and there _is_ a class named NSMutableArray.
You can do mixins fairly easily in ObjC, but in Java, you'd essentially have to resort to runtime bytecode rewriting. Take a look at what popular libraries like Hibernate, Spring, AspectJ use under the covers to perform their magic.
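In ObjC the usual tool is a category, which bolts new behavior onto an existing class without subclassing or touching its source (a sketch; the method is made up):

    #import <Foundation/Foundation.h>

    @interface NSString (Reversing)
    - (NSString *)reversedString;
    @end

    @implementation NSString (Reversing)
    - (NSString *)reversedString {
        NSMutableString *result = [NSMutableString stringWithCapacity:[self length]];
        NSInteger i;
        for (i = [self length] - 1; i >= 0; i--)
            [result appendFormat:@"%C", [self characterAtIndex:i]];
        return result;
    }
    @end

    // Every NSString in the process now responds:
    //   [@"stressed" reversedString]  =>  @"desserts"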