This is just a side-effect of the blub paradox disease. One of the symptoms for those that are afflicted with it is ranty blog posts about new languages that the afflicted person does not perceive as being better because it doesn't really have the same semantics as the one language they are used to.
I think it was Douglas Crockford that said this. We don't get new and better technology because the old people accept it. We get new and better technology because the people using the old crap just die out.
If you had followed what Owens has written and done in Swift, you know that he has worked and pondered the language quite a bit. Dismissing it as due to "blub" is disingenuous.
Aside from brushing away the value of his opinions, such a statement also implies that Swift is in some way sufficiently removed from mainstream languages as to feel "foreign". That is not true. Swift is a pragmatic amalgam of features from many modern languages.
The issue here is that many of the ideas are not fully realized and are stunted to work within the constraints of the language (such as seamless C/ObjC interop). This is what causes generics to be only halfway there (the same goes for several other features). In generics, Swift aims for strictly typed, reified generics, but then leaves you with only very crude tools to build them. Generics in Swift are an exercise in frustration. Owens' feelings about generics might have changed if the support had been more complete.
When it comes to programming language reviews I don't think there is any value in them regardless of who is doing the review. Steve Yegge in one of his epic posts explains why everyone should design a language from scratch to see what the process is like. Once you do that the magic and wonder disappears and you realize both how hard and easy it can be depending on what constraints you have. Here's a nice starting point http://nathansuniversity.com/.
You're the one who invoked "blub", essentially insinuating that the reason for his inability to appreciate the language was that he is not used to it.
I addressed that, and your counter is that it's "hard to make a programming language"?
Of course it's hard. And the problem with Swift is that it tried to do so many things at once, even with the hard constraints it had. And this is precisely why the language ends up short of its goal.
Speaking of ObjC 3.0, the author's point is simply that an incremental improvement which removed redundant syntax from ObjC would have been just as useful but much easier to get up and running.
The features that Swift tries to incorporate are admirable and interesting for the most part, but the implementation falls short because it's quite simply too ambitious to even reach "usable" until 3.0 or something.
You're putting words in my mouth. I did not say "it is hard to make programming languages". It is a skill like any other that can be improved with sustained practice. My point is most of these conversations about the merits/demerits of languages are less than useless. Here's a programming language checklist http://colinm.org/language_checklist.html and another one about the history of programming languages http://james-iry.blogspot.com/2009/05/brief-incomplete-and-m.... This post was just a variation of the checklist which indicates to me the author is not really interested in any kind of deep analysis of the language and is just publicly bitching and moaning. That to me is an indicator of the blub disease.
Regarding "blub" I already told you that this is unlikely given the amount of dialog and work Owens has done in the language so far.
He is one of the few people who has written in depth about the language (not tutorials)
If you didn't like my attempt to make sense of your non sequitur about programming languages, you should perhaps reconsider your approach to commenting / writing.
Just saying that he is pulling opinions out of the blue ("not really interested in any kind of deep analysis") is blatantly false, as his other articles IN THAT BLOG amply demonstrate.
>Regarding "blub" I already told you that this is unlikely given the amount of dialog and work Owens has done in the language so far. He is one of the few people who has written in depth about the language (not tutorials)
Well, if said post is anything to go by, that's not saying much...
An uninformed post with several questionable arguments is totally fine for judging someone's understanding of a language.
And if it's the "last chapter" of his posts (that is, something he wrote after several previous posts exploring the language), it's even better to see if his opinions are "worth reading". In the sense that a first post with his initial impressions of the language would be more excusable not to be that good.
Plus, reading the "last chapters of books to see if they're worth reading" sounds like a perfectly OK way to judge something like a technical book. If the last chapters are crap, why would the previous ones be any better?
If you weren't talking about technical books, then the analogy doesn't apply. Tech posts are not some linear narrative like a book, where you don't read the last chapters first because you might get spoilers. In fact it's common to skip the first introductory chapters in tech books, since they are mostly intended for beginners.
Well, I told you that the blog forms a narrative, of which this entry forms a part. His reasons for reaching the conclusions in this entry are based on earlier investigations, which are detailed in previous blog entries. The accusation that this entry is lacking context is similar to making the same claim after reading only the end of a book: of course it won't make any sense unless you have actually read the parts that form the context.
His experiments with the language and the basis for his statements are investigated in detail in the previous entries. When I point that out, you claim to be unwilling to read them because the last entry did not make sense. How is this NOT like dismissing the last chapter as not making sense as a stand-alone story and refusing to read the rest?
The text we're discussing, on the other hand, is to be understood as a commentary on the language after using it for quite a while.
To me (and just to put this in perspective, I've written well in excess of 10k LOC of Swift during the summer) his issues make perfect sense when seen with a somewhat experienced eye. For example, the issues with Swift generics aren't immediately apparent. It's only after using them for a while that you can say that the missing features really ARE a problem for everyday usage, and that this is not just a theoretical concern.
Similarly, the problems with Optional aren't really obvious from the beginning. (And optionals seemed like such a win initially. Built-in Optionals! The language built to support them everywhere. Seamless interop with ObjC. Safe unwrapping! Syntax sugar for flatMap, etc. And then they ended up being just as much a burden as a help.)
This would suggest that the very reason you see this as "an uninformed post with several questionable arguments" is actually that you have very little experience with the language. Consequently you see what you believe are meaningless or "questionable" arguments, simply because you assume the blog article was written with similarly limited experience of the language.
Since his issues aren't obvious at a glance, you conclude that they are false, never entertaining the idea that they represent a much deeper understanding of the language than you have achieved.
I don't see how. Being a blub language has nothing to do with ease of use for newcomers and in fact no language can be a blub language. It is more a reflection of the user. If all I know is imperative programming constructs because I've been writing C for 20 years and I start to learn Prolog but fail to understand what constraint logic programming is really about because I'm constantly trying to use imperative constructs in Prolog then I'm afflicted with the blub disease. Neither C nor Prolog is a blub language.
That doesn't suggest Swift is a Blub-like language to me.
Though not explicitly stated in the Blub paradox, I'd say it implies that a fresh programmer, unfamiliar with both Blub and the more advanced language, would pick the more advanced language.
The Blub paradox doesn't automatically apply to every language whose syntax you find bizarre. It also has to be a more advanced language, and the reason that you don't like the language's syntax and constructs must be because you don't understand the advanced features they enable.
This is clearly not the case with Obj-C and the post you mentioned.
How is that a counterexample? Bizarre syntax alone is not enough for the Blub paradox. I don't know anyone who seriously argues that Obj-C is a more advanced language than Java/C++, so Blub doesn't apply.
Objective-C's object-orientation via dynamic messaging is much more advanced/powerful than the Abstract Data Types available in C++ and Java. It enables features such as target/action, NSUndoManager, Higher Order Messaging, distributed objects... and their concise implementation.
If you don't understand dynamic messaging and see Objective-C as just a way of doing things that you would do in Java or C++, then I'd agree that it is less advanced at doing those things.
The consensus is that Objective-C's brand of OO is not particularly advanced, and Objective-C as a language is definitely not considered advanced. Keep in mind Paul Graham's essay had Lisp in mind, not a C derivative.
Nevertheless, the post that sparked this thread was a comparison between Swift with Obj-C. Now, even if you consider Obj-C an "advanced language" (which most people do not), it's completely unreasonable to think that a programmer looking at both Swift and Obj-C would disregard the latter because of the Blub paradox. It would be highly... let's say nonstandard to consider Obj-C more advanced than Swift.
On the contrary, it is well known that ObjC and related languages represent a different strain of OO than Java or C++. In fact, people have gone so far as to say that C++/Java don't really represent "real" OO at all.
It should be obvious that ObjC belongs to the Smalltalk lineage, which is quite different from C++/Java. Unless you have understood why Smalltalk is still held in high regard, you haven't understood the language.
> I think it was Douglas Crockford that said this. We don't get new and better technology because the old people accept it. We get new and better technology because the people using the old crap just die out.
A bit ironic considering what a stick-in-the-mud he is.
I really like Swift - the strong static typing, immutability options, and even generics - but there are some features of Obj-C that are lost. Objective-C is a very flexible dynamic language with opportunities for runtime swizzling and object introspection. It is really like a lower-level Ruby, and if you use it a certain way it is basically duck typed (you can send any message to any object). Swift is much more of a static language and only really enables introspection and downcasting if you enable Objective-C compatibility for a class. This makes some things harder, at least to do the same way as in Objective-C (JSON parsing with NSJSONSerialization isn't pretty).
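To make that last point concrete, here is a minimal sketch of the downcasting dance (recent toolchains import NSJSONSerialization as JSONSerialization; the payload and key are made up):

import Foundation

let data = "{\"name\": \"Swift\"}".data(using: .utf8)!

// Everything comes back as Any, so each level needs its own conditional downcast.
if let object = try? JSONSerialization.jsonObject(with: data),
   let dictionary = object as? [String: Any],
   let name = dictionary["name"] as? String {
    print(name)   // "Swift"
}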
As mentioned I prefer Swift but I do acknowledge that there are some losses that others may feel more keenly.
His argument seemed to be "look, you could have added this feature to existing Objective C syntax rather than inventing a new language" - which is all true, but rather misses the point.
At least one of the problems with Objective C is its syntax, and he states this himself - "Much of the Objective-C syntax is clunky, bolted on, and downright infuriating at times". Yet nothing I see in his suggestions solves that - unless he's suggesting ditching all existing syntax (which is pretty much what Swift is), then all he's doing is proposing yet another syntax structure to add to the already messy bastardisation of C and Smalltalk that makes Objective C the pain it is today.
Except for the "therefore Obj-C is superior" part, I agree that Option types in languages such as Swift and Scala are unfortunately marred by those languages' need to be compatible with Obj-C and Java, respectively.
I understand idiomatic Scala won't use null, just as I assume idiomatic Swift won't use nil, but nevertheless the fact that they allow nulls weakens their type systems.
Can you explain more about what you mean by Swift allowing nil weakening its type system? Swift types don't allow nil, but you can use a different type that is the optional counterpart. This is just the same as using the Maybe monad in Haskell and can be reimplemented in that way. Do you think that this also weakens Haskell's type system?
Now if your issue is with the "!" operator, I would say that there is a case to be made, but that should be used sparingly and carefully and is an obvious place for additional code review.
Hmm, maybe Swift's Optional works differently to Scala's Option, in which case my objection can be dismissed.
In Scala, you can declare a value to be of type Option[T], which means it will be either a Some[T] or a None. This looks superficially like Haskell's Maybe monad.
val x: Option[Int] = Some(1)
The problem with Scala is that you can also write something like this:
val x: Option[Int] = null
By declaring a value to be of type Option[T], you "promise" it won't be null, but this is not enforced by the compiler. It's obvious this is a huge difference from Haskell's Maybe, and it weakens Scala. Null shouldn't even be part of Scala, but it's still there to retain compatibility with Java.
Can a similar example be written with Swift? I assumed it was possible, but maybe I'm mistaken.
In Swift, nil is the equivalent of Scala's None (Haskell's Nothing), so there is no separate null and no distinction of the kind I think you are describing in Scala.
Swift does have "implicitly unwrapped optionals" which you declare with "!":
var x:Int! = nil
rather than normal optional syntax of
var x:Int? = nil
It does weaken the type system and I would only make very limited use of it. I would only ever use it for properties that will not be nil after the initialiser has finished. Even though there is a way around the type system to do dangerous things I think it is still far better to have the type system there and to only work round it when absolutely necessary than to default to something much weaker. As I've mentioned elsewhere "!" is a code smell and I would use it rarely and audit those places heavily.
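A rough sketch of that narrow use (Repository and Database are made-up types; the property is always assigned during setup immediately after initialisation, so the rest of the class treats it as non-optional and a broken promise fails loudly at the point of use):

final class Database {
    let name: String
    init(name: String) { self.name = name }
}

final class Repository {
    var database: Database!          // nil only between init and configure(with:)

    func configure(with database: Database) {
        self.database = database
    }

    func describe() -> String {
        return "backed by \(database.name)"   // no unwrapping noise here
    }
}

let repo = Repository()
repo.configure(with: Database(name: "cache.sqlite"))
print(repo.describe())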
That's one of the benefits of Rust imho - no legacy baggage. One of its downsides as well - no ecosystem to draw from. Oh well, take the good with the bad I guess.
"I look at the feature set of Swift, and I have to ask myself the question: what’s the point? What is really trying to be solved? And does it provide significant benefits over languages that already exist?"
One reason may be that large companies want ownership of a modern, C#/Java/Go-like language [1]. I was an intern at Adobe when it was trying to develop such a language (ActionScript 4) + VM, and a primary reason not to adopt an existing language is that they wanted control and to be free of legacy baggage as much as possible. Obj-C is reasonable to use today, but its C and Smalltalk underpinnings sometimes feel like anachronisms for people writing App Store apps. This is probably especially true if you're a language designer like Chris Lattner at Apple and are tasked with fixing the most prominent pain points that your language users face.
[1] I realize these are somewhat diverse languages.
Surely a great thing about Swift is that the compiler knows the types of $0 and $1 and will prevent you from doing stupid things with them like you can in Obj-C?
Yes, and having lambdas and unnamed parameters is a great thing when the local context is enough to explain it.
People are totally okay working with unnamed numbers and values, and it is just as important with functions when you program in a functional way.
We write
x2 = x1 + 10
if it makes sense in the context. When it needs a better explanation we name the value
width = 10
...
x2 = x1 + width
If we had to name EVERY number, the code would be less readable. The same goes for functions: sometimes a lambda with unnamed parameters (e.g. $0 < $1) is more readable than naming the lambda and all of its parameters. Each name adds a conceptual burden.
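A small Swift sketch of that trade-off (the names array is made up; assumes a recent toolchain):

let names = ["Carol", "alice", "Bob"]

// Shorthand arguments read fine when the local context says it all:
let ascending = names.sorted { $0 < $1 }

// ...while named parameters earn their keep once the comparison needs explaining:
let caseInsensitive = names.sorted { left, right in
    left.lowercased() < right.lowercased()
}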
Last night was one of the few times I've touched ObjC in months. Since I was starting a project from scratch, it took me a while to recall the precise details of @property syntax.
He's right to say that Swift is mostly syntactic sugar. However, as he points out, all we really needed was ObjC with a better syntax.
Also, the debugging tools suck right now. My biggest beef with the language is that it didn't remove the return keyword. I was really hoping for a language that was more OCaml, less C++.
In summary: is it perfect? No. Is it an improvement over ObjC? Definitely. Are there areas in which ObjC shines? A few.
Great post. I'm not sure if it's because of my Objective-C bias or not, but Swift just seems messy to me. Everything from how it's spaced and structured in the editor to how it's read (lack of headers, clear separations between data structures, methods, etc.). Say what you want about Obj-C, but it was damn organized.
My instinct is that most well-versed Objective-C programmers won't get much out of Swift. But for people like me - non-Objective-C programmers - it's been great. I've long been put off by Objective-C's alien syntax. I've been told numerous times that once you get to know it you adjust, and I believe it, but iOS programming is never going to be my full-time job, so I'm not interested in spending that amount of time getting comfortable.
With Swift, however, I was up and running very quickly. Also running into EXC_BAD_ACCESS errors, but hey, early days...
As he predicted, I was with him until his rant about generics. While the example he gives supports his point, that's nothing specific to generics but rather to this particular implementation of generics. For example, he uses the following example as "bad" generics:
func reverse<C : CollectionType where C.Index : BidirectionalIndexType>(source: C) -> [C.Generator.Element]
Compare that with a cleaner signature along the lines of `reverse(source: CollectionType<a>) -> CollectionType<a>`, in which `CollectionType` is parameterized by the type variable `a` [0].
I also take issue with the idea that removing static checks isn't a big penalty. In particular, I cringe a little at the following sentence
> Because it does not actually matter. If an Int gets in my array, it’s because I screwed up and likely had very poor testing around the scenario to begin with.
The benefit of static typing is that you don't need testing for things like that. The compiler guarantees safety, allowing you to avoid writing test cases that are mundane and boring, such as checking that you don't put an Int into a String array.
The following paragraph also seemed questionable to me:
> Yes, in this example, I’ve moved the validation from compile-time to runtime. But you know what, that’s likely where many of these types of errors are going to be coming from to begin with because the content of the array is being filled in with dynamic content getting coerced into your given type at runtime from dynamic input sources, not from a set of code statements appending items to your arrays.
I think this is somewhat incorrect. You should never just be type-casting your inputs. (In fact, I think it should ideally be impossible to do so without the compiler generating really big flashing warnings saying "THIS IS DANGEROUS!"). The static verification here will prevent you from doing silly things, and should ideally force you to do input validation at the location of input, instead of blindly casting things to the type it needs.
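A tiny Swift illustration of validating at the point of input instead of casting (userInput is a made-up value):

let userInput = "42x"

// Int(_:) fails with nil rather than producing a bogus value, so the error is
// handled where the data enters the program, not deep inside it.
if let quantity = Int(userInput) {
    print("valid quantity:", quantity)
} else {
    print("invalid input:", userInput)
}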
All in all, not a bad discussion, but I think that this piece demonstrates that bad implementations of static typing can severely detract from the good qualities of static typing, and that it takes some getting used to to program well in a statically typed language (not casting things spuriously is a good example of that). That said, I know very little about Swift, so take all of this with a grain of salt.
[0] It may at this point be clear that the inspiration here is Haskell and ML; I am a big proponent of these languages, and believe that static typing can eliminate many common errors.
To me, by far the biggest advantage of generics is that it makes function signatures a whole lot easier to understand. I develop on both Android and iOS and I can't even begin to count the number of times I've read a function definition, seen that it takes an (NSArray *) as a parameter, and had to trudge through 5 layers of indirection (either upwards to find the construction, or downwards to find the use) to figure out what that array is supposed to be filled with. Contrast that with Java on Android: if a function takes a List with a type parameter, I can immediately tell what kind of list it's expecting. It's true that if the code had proper tests and a good architecture this shouldn't be an issue, but when was that ever the case? You just get handed the git repo your predecessor left and are expected to read the code and understand it.
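A minimal sketch of that difference in Swift (Reminder is a made-up type):

import Foundation

struct Reminder { let title: String }

// With an untyped collection, the signature says nothing about the contents,
// so you end up chasing call sites to find out what the NSArray actually holds.
func schedule(_ reminders: NSArray) { /* ... */ }

// With generics, the signature documents itself and the compiler enforces it.
func schedule(_ reminders: [Reminder]) { /* ... */ }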
>> Because it does not actually matter. If an Int gets in my array, it’s because I screwed up and likely had very poor testing around the scenario to begin with.
>The benefit of static typing is that you don't need testing for things like that. The compiler guarantees safety, allowing you to avoid writing test cases that are mundane and boring, such as checking that you don't put an Int into a String array.
This is a canard. You hardly ever (within an epsilon of "never") write tests for types. You write tests for values that you expect, which the type-system doesn't guarantee. Those values have types, so those get tested as a side effect without any additional effort.
What about when you are handling external data? Don't you test what will happen when unexpected types are received? How do you test the handling when a JSON key you were expecting to be an array turns out to be a string or an integer or vice versa?
Do you handle nils correctly in every place they might occur? These are all things that can be covered by a powerful type system (such as Swift's, unless you overuse "!"[0]).
[0] Anything other than use for a member that is initialised during the init method is a code smell in my view.
Well, types are essentially sets of values. So if you can capture the values that you expect in a type, then the type system _does_ give you a guarantee.
OK, I see your point. Of course, the type system will not substitute for this kind of test. But it allows you to check the whole range of values. This is helpful if you, e.g., wanted to handle overflows, wanted to limit the operands to positive numbers, etc.
Not only will the type-system not substitute for these kinds of tests, but also the reverse: you just don't write tests for types, as they are subsumed by the value tests, which is why I object to the canard of "the type system saves you from having to write trivial tests for types".
And yes, a type-system can do certain types of "forall" analyses that are difficult or impossible with tests, but that's a different topic.
Generally with tests, you're less concerned about having some particular examples succeed, than you are with finding if there are possible values that fail.
I know that's just an example and that you probably don't actually write tests like that in your day job, but it's a terrible unit test. For example, it fails to tell apart addition from the constant function 7.
Even if you add more data points, you're still not testing addition.
In a way, "testing for specific values" is very misleading. Consider that it'd be the wrong if you were actually trying to prove something. Now, tests aren't proofs, but we still should write them to be as strong as it's practical.
Strong static typing, with property testing (a la QuickCheck) wherever possible, is preferable in my opinion.
> it fails to tell apart addition from the constant function 7
It's not supposed to do that.
Have you heard of TDD? In a TDD/XP setting, the constant function 7 would be the appropriate implementation for making that test pass, because it is the simplest thing that could possibly work. Then you add another test, let's say add(40,2) EXPECT(42).
Now you could extend your add() function to do case analysis, and maybe in a first step you even do that. But then you refactor towards better code, and you replace the case analysis with actual addition.
For addition the steps are, of course, a bit contrived, because it is "obvious" what is supposed to happen. For production code the technique works really well in keeping the solution as simple as possible but no simpler. You probably wouldn't believe all the "obviously needed" code I haven't written because of it.
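A sketch of that progression in Swift, with plain asserts standing in for a test framework:

// Step 1: add(3, 4) == 7 passes with "return 7", the simplest thing that could work.
// Step 2: add(40, 2) == 42 forces a case analysis (if x == 3 && y == 4 ...).
// Step 3: refactoring collapses the case analysis into the obvious implementation:
func add(_ x: Int, _ y: Int) -> Int {
    return x + y
}

assert(add(3, 4) == 7)
assert(add(40, 2) == 42)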
Another interesting benefit is that it splits coding into two distinct activities: (1) making the test pass as stupidly as you can and only then (2) making the code good by refactoring an existing solution.
Doing a good job of (2) is much, much easier when you are transforming a solution known to work and safeguarded by tests.
Also having just two test to induce addition may seem a bit sparse, and it is(!), but in my practical experience with TDD, I have been utterly surprised by how few concrete examples have been sufficient to safely cover potentially large spaces. Much fewer than I would have suspected or believed possible.
I understand that this kind of testing often works (though I'm less enthused with TDD as a design technique, which is unfortunately what TDD proponents emphasize, and what you seem to be describing here). I'm saying it is too limited. "But software is written and tested this way", you can say. But buggy software is written every day, even with testing, which is why we should strive to improve our testing tools & processes.
To be clear I'm not saying "abandon TDD" (ok, maybe I'm not rooting for TDD in particular, but I'm definitely not saying "abandon unit testing"). What I am saying is "complement it with static typing and more advanced testing techniques, such as property testing."
Finally:
>> it fails to tell apart addition from the constant function 7
> It's not supposed to do that.
Well, in a way it is. Your tests must attempt to detect flawed implementations, even though they cannot prove correctness. I'm sure you can think of cases where an algorithm sometimes seems to return the correct result, but fails in some cases.
Well, you're obviously not familiar with TDD, just with silly straw-man arguments against it. No, the 4+3=7 test is not supposed to force the creation of actual addition; the additional tests plus design principles do.
And I am not saying "but software is written and tested this way". Software is written and (often not) tested in many ways. I am saying that in my practical experience, software written this way is both much simpler and much more robust than people not familiar with these techniques, such as yourself, imagine. Or maybe can imagine.
As to architecture, I strongly recommend Henrik Gedenryd's PhD thesis, "How Designers Work" [1].
By the way, please don't confuse "easy" with "simple" like the second blog post you reference.
Here are some interesting comments from the above:
- Bloch: tests are inadequate as documentation.
- Norvig: I like TDD but it's inadequate to discover unknown algorithms.
(The infamous Sudoku Solver debacle is a particularly painful example of Norvig's claim, and in particular I think Ron Jeffries' attempt at doing TDD was embarrassing. Like the blog says, if I were a TDD proponent, "I'd be pretty strongly tempted to throw Jeffries under the bus")
Very few if any of the people mentioned outright dismiss TDD, but they do point out its limitations, which to me are mostly about TDD as a design process.
If we go back to TDD as testing, my initial objections apply: it's useful, but it's not enough. More advanced and formal tools, and static typing, are of great help here.
Even if you disagree with everything else, you must at least agree with this: computers are about automation. Automating as much as we can, including testing, is a good thing. Writing tests is itself something that can -- in some areas -- be automated, in which case it should be preferred over hand-writing those tests.
> For example, it fails to tell apart addition from the constant function 7.
Right. But how would you test for the addition function without exhaustive O(N^2) search over the whole input space? (You need to test for commutativity; if you're going to test for associativity, it grows to O(N^3)).
My comment was in the context of the "unit tests vs type systems" debate. It was meant to illustrate my opinion that "more general" is better than "specific values" when testing, as much as is practically possible.
I'm aware that addition is a toy example, but suppose we want to test our implementation:
Except for very simple verification, to exclude obviously broken implementations, I'd rule out testing specific values such as 3+4=7. And, like you said, performing an exhaustive exploration of all values is out of the question.
So I'd try property testing instead. Relevant properties in this case are associativity, commutativity, etc.
As an example, I'd try writing properties such as:
for all X, Y: add(X, Y) = add(Y, X)
I'm not arguing that everything can be tested like this, or that the properties are always easy to formulate; but when they are, I think this is the superior approach.
The tools that I'm aware of that can do this (such as QuickCheck or ScalaCheck) come from statically typed languages, though I don't see why they couldn't be used with dynamically typed languages.
The point is that this starts to look a lot closer to static typing and "testing generalities", philosophically, than what proponents of dynamic typing + unit testing propose. Testing generalities is better than testing specifics, because it's at least a step closer to a proof of correctness.
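Hand-rolled, the commutativity check above might look something like this in Swift (assuming a recent toolchain with Int.random; a real property-testing library would also shrink failing inputs and cover edge cases):

func add(_ x: Int, _ y: Int) -> Int { return x + y }

for _ in 0..<1_000 {
    let x = Int.random(in: -10_000...10_000)
    let y = Int.random(in: -10_000...10_000)
    assert(add(x, y) == add(y, x), "commutativity failed for \(x) and \(y)")
}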
> The tools that I'm aware of that can do this (such as QuickCheck or ScalaCheck) come from statically typed languages, though I don't see why they couldn't be used with dynamically typed languages.
The tools use type information to determine the universe to draw test values from and the mechanism used to do it. You can actually do something very similar for dynamically typed languages, but if you don't have queryable type annotations for parameters, etc., you have to have more verbose test specifications that provide the scope of testing.
E.g., in a statically-typed language (or a dynamically-typed one with optional type annotations), if your add function is defined as something equivalent to:
double add(double x, double y)
{
...
}
then your test system can use that information and a value generator function for doubles to generate the appropriate test data to validate the property.
OTOH, in a dynamically typed langage where you just have something like
def add(x, y)
{
...
}
Without some additional specification, the test framework doesn't know how to generate the X and Y values for the test you propose.
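A sketch of that idea in Swift, with made-up names (Arbitrary, forAll) standing in for what a QuickCheck-style library provides; the statically known parameter type is what tells the framework which generator to use:

protocol Arbitrary {
    static func arbitrary() -> Self
}

extension Int: Arbitrary {
    static func arbitrary() -> Int { return Int.random(in: -1_000...1_000) }
}

// Because A is fixed at the call site, no extra specification is needed to pick
// the right generator.
func forAll<A: Arbitrary>(_ runs: Int = 100, _ property: (A, A) -> Bool) -> Bool {
    for _ in 0..<runs {
        if !property(A.arbitrary(), A.arbitrary()) { return false }
    }
    return true
}

print(forAll { (x: Int, y: Int) in x + y == y + x })   // true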
There are degrees of static typing. Swift tries to get close to Haskell, but without the full set of tools to do so without losing things along the way. This is sort of the worst of both worlds.
In the meantime Strongtalk already demonstrated that you can write a fairly rich optional type system to reap the compile time benefits of static typing while retaining the runtime power of message passing. I think that would have been more fruitful.
Yes, there are degrees of static typing (hence, some people unfortunately disregard great type systems because they are only familiar with Java's cumbersome one).
The thing is, as soon as you realize that unit testing (of the kind proposed by 3+4=7) alone is not enough, or even adequate, and that you must use more advanced tools, and that more advanced tools can and should be automated whenever possible... you will probably start considering the possibility that static typing was a good idea after all. Especially if you were already considering type annotations in your dynamically typed language because you needed the extra info in order to use more advanced testing tools.
Static typing is an incredibly useful automated verification that the compiler performs for you, freeing you to write more interesting tests. It won't catch all mistakes, but no-one is arguing that any single technique will. The argument is about the relative merits of "unit testing alone is enough" vs "testing + automated testing with type checking is way better".
Since we're talking about Swift generics here, you have to realize that the point is not whether generics / static typing can be useful, but if the type of generics and static typing as enforced by Swift is good enough.
The problem here is that Swift has incomplete generics. In several aspects they are significantly worse than even Java's(!)
Together with its brand of static typing and type inference that doesn't always work, you are really working against the compiler to get things to work. Not because you get the types wrong, but because you have to work around the incompleteness of the generics implementation!
Swift is not Haskell and should not be mistaken for it. The problem with Swift's generics is one of incompleteness, inconsistencies and problematic trade-offs to maintain ObjC compatibility.
Why do you say it is the worst of all worlds? To me it is a very good compromise and about as good a situation as you can get while maintaining C/Objective-C compatibility. Don't forget there are performance benefits to being less dynamic (not using runtime message passing) and you can opt in to that runtime mode if you want by making classes @objc.
Well, in practice the broken generics rule out a large number of reasonable uses of generics, and the lack of co/contravariance forces you into hideous workarounds everywhere.
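For instance (Box, Animal, Dog are made-up types), user-defined generic types in Swift are invariant, which is where the workaround-writing starts:

class Animal {}
class Dog: Animal {}

struct Box<T> {
    let value: T
}

func feed(_ box: Box<Animal>) {}

let dogBox = Box(value: Dog())
// feed(dogBox)   // error: Box<Dog> cannot be converted to Box<Animal>
feed(Box(value: dogBox.value as Animal))   // the manual re-wrapping you end up writing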
As for the promised runtime performance improvements, they have yet to surface, and worse: the language is still plagued by extremely uneven performance, even for optimized builds. I could go on at great length citing issues that will be hard to fix within the next year or so. Unfortunately. This is what my fairly hard-won experience with the language (15k LOC of Swift code) has revealed so far.
I haven't done that volume but I have written a few thousand lines including this branch of a GCD wrapper library which uses generics to allow return values to be passed between the closures running on the different threads in a typesafe way: https://github.com/josephlord/Async.legacy/blob/argumentsAnd...
It was a nightmare getting it all building right (generic methods on generic classes took me a bit of time to sort out) but I think it works well now.
As for performance: noting the amount of Swift you've done (and your activity on the DevForums) I expect you know all this, but I've done quite a bit of work on speeding up someone else's code. There are certainly plenty of ways to accidentally slow down code.
I regard Swift's inconsistent runtime performance and over-reliance on compiler optimizations as a major problem with the current state of the language.
It does not help that some slowdowns are due to bugs, and some due to questionable implementation details that require knowledge of how the runtime operates.
Of course, knowing about the runtime is necessary for all micro-optimization regardless of language, but Swift currently requires it almost all of the time.
The difference between Swift and ObjC/C is glaringly obvious.
Your GCD wrapper examples use only the simplest form of generics, with no constraints at all. Multiply the problems you had getting this to work by 10,000 and you get a sense of how "easy" it is to do anything complex with Swift's generics. It's a maze of missing features and bugs.
I think it's very understandable that Owens got fed up with it.
There are actually many poor claims you made in your posts about "good tests".
> I'm aware that addition is a toy example, but suppose we want to test our implementation:
> Except for very simple verification, to exclude obviously broken implementations, I'd rule out testing specific values such as 3+4=7. And, like you said, performing an exhaustive exploration of all values is out of the question.
> So I'd try property testing instead. Relevant properties in this case are associativity, commutativity, etc.
> As an example, I'd try writing properties such as:
> for all X, Y: add(X, Y) = add(Y, X)
These properties can also be satisfied by implementations of add() that:
- return a constant value
- return the smaller of x and y
- return the larger of x and y
The cases you threw out as an "obviously broken implementation" are required to actually validate the functionality of the method. The functionality of the method is also one of its properties.
You can write it in a more generic way than simply: assert(7, add(3, 4)). However, those tests are _also_ required. Without them, you never actually test that the `add()` function does what it's supposed to: add two numbers together.
Regardless of type system, you also have to worry about underflows and overflows - another property of the functionality of the method.
> You are right, without additional information property testing would be less useful. Which is yet another reason to favor static typing in my opinion.
Static typing doesn't help you constrain sets of inputs; it may not be valid that your method accepts all ranges of integers. You could have a method `addbase2(int x, int y)` that is to be used only when x and y are powers of two because of an optimization you perform in that method. Static typing doesn't help you generate the correct input set for x and y.
The only thing that static typing provides, in regards to test cases, is this:
def add(x, y)
    assert x is int
    assert y is int
    return x + y

// test cases
assertIsThrown(add("foo", "bar"))
That was the test case you had.
Regardless, the point of the article was not that static typing is bad. There is value in it. However, there is also value in not being so rigid in your type system that things don't work well.
add((short)0, (long)1) // compiler error if you have an extremely rigid type system
Generics systems typically swing the pendulum far to the right, requiring an extremely rigid type system. That always causes pain. The question you have to ask is: is the ROI worth it? For some, it is. For others, it's not.
Note I never claimed unit testing should be disregarded (I practice it and recognize its benefits), or that static types catch all errors, or that add(x,y) was anything but a toy example.
Please note I didn't throw out add(3,4)==7, but instead pointed out it's terribly inadequate as a test. Additional testing tools must be employed; unit testing alone of this kind is not enough.
With property testing you're still not proving correctness. Tests cannot prove correctness. But it's a step in the right direction. Sure, maybe you have a function "add" that is associative, commutative, and has a neutral element, and it's still not integer addition. I'd argue your confidence in such a function will be a lot higher than if you had simply unit tested a few border cases. You can still do that in addition to property testing, anyway.
> The only thing that static typing provides, in regards to test cases, is this [example]
This assertion is wrong. Static typing done well provides a lot of things "for free", such as restricting incorrect behavior. For example, if you write generic methods you can rule out entire classes of misbehavior. It's not that you "type check" that you are not using something that is not an int (as in your example), but that you simply forbid entire groups of operations at compile time!
Here's another toy example to illustrate the point: what values can a function with the following signature return?
f :: [a] -> a
(For the purposes of this question, you can read that as "a function that takes a list of values of type a and returns a value of type a".)
Now, repeat the exercise with a dynamically typed language. What values can the following function return? (If you want, for the purposes of this question, assume it returns an atomic value and not a collection).
dynamic_f(a_list)
This has an obvious implication on the effort you must make when testing either function.
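For illustration, a rough Swift counterpart (ignoring the empty-array case, which neither version handles gracefully); the Haskell type actually guarantees the property, while the Swift analogue merely shows the shape of the signature:

func f<A>(_ xs: [A]) -> A {
    // Nothing here can conjure an A out of thin air; a reasonable implementation
    // can only hand back one of the elements it was given.
    return xs[0]
}

print(f([3, 1, 4]))   // 3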
> Please note I didn't throw out add(3,4)==7, but instead pointed out it's terribly inadequate as a test. Additional testing tools must be employed; unit testing alone of this kind is not enough.
I don't follow what you are saying here at all... It's an API test with no external integration points; what other kind of tests besides unit tests would you have? Your `add(x, y) = add(y, x)` tests are still unit tests.
Also, you said that you would "rule out testing specific values such as 3+4=7". You have to test specific values; the contract of the function is:
f(x, y) = z
Where z is the mathematical sum of x and y.
Specific value testing is the only way you can verify that claim for a given set of inputs.
>> The only thing that static typing provides, in regards to test cases, is this [example]
> This assertion is wrong. Static typing done well provides a lot of things "for free", such as restricting incorrect behavior.
This was taken out of context; it was in reference to your "generator" of test values for X and Y. The type signature alone is inadequate to generate test values.
Regardless,
f :: [a] -> a
Is no harder to test in a dynamic language.
let r = f([...])
assert(r, correct_value)
assert(r.type, correct_type)
It's up to the contract of the function to determine what, if any, validation needs to be done on the input. This is true regardless of type system. The only question is this: do you also check the type.
Again, type is only _one_ of the constraints that get applied to parameters. In the add function example, the other constraints are:
1. x <= TYPE_MAX (for free in a statically typed system)
2. x <= TYPE_MAX - y
3. y <= TYPE_MAX (for free in a statically typed system)
4. y <= TYPE_MAX - x
Plus the similar for TYPE_MIN. In the `addbase2` example, additional constraints are:
1. x power of 2
2. y power of 2
In this example, we still need to add verification for two-thirds of the constraints.
On the flip-side, with generics, especially with the type of generics we see in Swift (using the `f :: [a] -> a` example), you'll probably need to model constraints of the collection, the element type of the collection, and the type of indexer that is being used if you wish to make your function actually work.
And then your implementation only works for types that rigidly adhere to the type conformance, whereas the dynamic one can work with any type that conforms to the protocol, whether loosely or explicitly.
This flexibility is very powerful, is not hard to code safely around, and requires significantly less code gymnastics before you can even get your code compiling.
I think you are underestimating the power of statically typed generics when using a language with a decent type system.
In your example:
let r = f([...])
assert(r, correct_value)
assert(r.type, correct_type)
This doesn't test everything we need to know. For example, the following function passes your asserts (in pseudocode):
f(a_list):
    if (a_list instanceof List[Int]):
        return 0
    else:
        ... other stuff ...
whereas the original, statically typed version of the function with signature
f :: [a] -> a
cannot ever do that. This is a profound insight. It cannot return zero "in the case of a list of ints". It doesn't know anything about its input if you don't tell it. And you shouldn't tell it, either, unless you have a very specific reason to do so.
Also, in programming language with decent static typing (that is, not Java or C++; I wouldn't know about Swift to comment), there is a huge additional difference between the two functions:
I can promise you my function doesn't write to disk, doesn't output to the screen, etc. You cannot promise the same of your function. Your function might work when it has access to the disk, as in your test environment, but fail in production where it does not. OK, so you inspect the code to make sure your function (and every function it calls) doesn't do I/O. But I don't have to do this, because the type system tells me my function is side-effect free.
So now you have some pretty powerful assurances in favor of my statically typed function:
- It doesn't perform side effects. I don't know about the dynamic function.
- It doesn't produce any value out of thin air; it must work with the list I passed it, because it doesn't know anything else. It doesn't know how to create new values.
- As a consequence of the above, there are fewer possible implementations of my function than of yours, excluding no-ops.
This is a kind of testing "for free" that you don't have with dynamically typed languages. And it is pretty powerful.
Yes, you can cover a lot of cases with unit tests in a dynamic language, but why not let the computer do the boring work for you? It's what computers are there for. Focus on the interesting test cases instead.
> I can promise you my function doesn't write to disk, doesn't output to the screen, etc. You cannot promise the same with your function.
WHAT?!
Your type signatures have absolutely no assurances with regards to side effects. They cannot even make a claim that the function is thread safe, let alone that it doesn't write to disk or output to the screen.
I'm baffled at why you think that is true:
int foo(int bar):
    // network call here
    // write a log to disk here
    // change a global value here
    return happy_int
And you are woefully mistaken about this claim as well: "it must work with the list I passed it, because it doesn't know anything else."
Many languages that actually have good generic type systems allow for type specialization. That means that I can provide different implementations for different types. So the contrived example of you doing something completely different with my list of ints in the dynamic version is completely possible in many statically typed languages too.
> Your type signatures have absolutely no assurances with regards to side effects. [example]
This might be true for Swift (which I suspect it is), but it's not true in the wider ecosystem of statically typed languages. A language with a type system which allows controlling side effects, such as Haskell, does indeed make such assurances. In Haskell, for a function to make network calls or write to disk, it must live within the IO monad (which can be seen in its type!). I'm discussing Haskell here because I'm more familiar with it, but there are alternative mechanisms in other languages.
Compare:
f :: [a] -> a -- I promise you this function doesn't do any I/O
with
f :: [a] -> IO a -- this function may do I/O
This is a powerful assurance right there! Of course, Haskell programs as a whole must live in the IO monad (a program without any kind of I/O is useless). But you're encouraged to write as much of the program as you can as pure functions, which can then be tested (unit tested or however you prefer) with the very useful knowledge that they cannot do I/O.
Next, generics systems. Languages with OOP and generics, such as Scala and, I suspect, Swift, let you do all sorts of naughty things within generic functions.
But doing generic programming as in Haskell is way safer in this regard. No, you are not allowed to specialize the type in f :: [a] -> a. Doing so would be unsafe.
So let's go back to my claim: the above function cannot do anything else but produce a value out of the list I passed it. It doesn't know how to produce something else out of thin air. It cannot "inspect" the value of its type; it has no unsafe "instanceOf" operator. It cannot even apply any operation to the values of the list (except of course list operations), since I didn't declare the generic type had any. This is a very powerful assurance that a dynamic language cannot make, and one that simplifies the tests I have to write.
Because of this property, you are encouraged to write, whenever possible, functions that are as generic as possible. Sometimes you can't, but then you'll specify as little as possible, such as:
sumAll :: Num a => [a] -> a -- "a" is a number with operations +, -, etc.
And then you'll have additional operations available for your type a, but not as many as if you were writing this with a dynamically typed language with no assurances at all!
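A rough Swift analogue of that constrained signature (assuming a toolchain with the Numeric protocol); the constraint spells out exactly which operations the body is given on the element type:

func sumAll<A: Numeric>(_ xs: [A]) -> A {
    // Numeric supplies +, * and the integer-literal conversion for the start value.
    return xs.reduce(0, +)
}

print(sumAll([1, 2, 3]))      // 6
print(sumAll([1.5, 2.5]))     // 4.0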
Once you realize this, you'll start thinking of generic programming as a tool that constrains the kind of errors you can make (because you have less choices to make, so to speak). And this has a huge impact on testing!
> So the contrived example of you doing something completely different with my list of ints in the dynamic version is completely possible in many statically typed languages too.
Of course, many statically typed languages are no better than dynamic languages in this regard. I was talking about decent type systems. Not sure where you'll place Swift, though.
Even with languages which allow instanceOf checks, such as Java and I'm willing to bet most OO languages, the practice is frowned upon and wouldn't pass a code review. Unless, of course, there was no other way to solve the problem, but this really would limit the usefulness of generic programming.
But your clean generic example is different from his. Your example specifies a CollectionType returning another CollectionType with elements of type a. His example specifies a CollectionType whose Index is a BidirectionalIndexType, returning an array of elements of the same type as the CollectionType's element type. I'm not sure about the last part because I don't really know Swift.
I'm just going to reply to one part with an example of the problem.
> I think this is somewhat incorrect. You should never just be type-casting your inputs. (In fact, I think it should ideally be impossible to do so without the compiler generating really big flashing warnings saying "THIS IS DANGEROUS!"). The static verification here will prevent you from doing silly things, and should ideally force you to do input validation at the location of input, instead of blindly casting things to the type it needs.
I did not say anything about type-casting inputs. I said coercing values into a given type. The naïve approach is to type-cast; the other way is to write the code for the coercion process.
// some collection we are holding the values in for some reason
NSMutableArray *inputValues = [[NSMutableArray alloc] init];
NSString *someInputValue = // probably read from user input or a file
NSInteger value = [someInputValue integerValue];
BOOL validInput = YES;
if (value == 0) { // we need to check that there really is a value of 0...
    NSString *trimmed = [someInputValue stringByTrimmingCharactersInSet:NSCharacterSet.whitespaceCharacterSet];
    NSString *trimmed0 = [trimmed stringByTrimmingCharactersInSet:[NSCharacterSet characterSetWithCharactersInString:@"0"]];
    if (trimmed0.length != 0) {
        // oops, actually had an error... handle it
        validInput = NO;
    }
}
if (validInput) {
    [inputValues addObject:@(value)];
}
Of all of the places where I could have had errors along the way, the last `[inputValues addObject:@(value)];` doesn't really concern me that much.
Also, the compiler doesn't help me get things correct... the only thing it would have helped me do is make sure I put an integer into the array, not that I had the right values in the array. Had I not known that `integerValue` returns `0` in its error cases, I would have not known that I need to write some additional code to verify the string value was indeed a zero.
And generics only help you when you have collections of identical types. Storing arrays of plist entries, for instance, requires you to revert back to `AnyObject` (or similar).
Generics can be helpful, if it's done well. However, even in .NET's generic system, with all the limitations and constraints it put in, there are many times where it still gets in the way.
I'm tired of fighting with my tools just to get the job done. Currently, Swift makes me fight a heck of a lot more than I want to or need to just to make the compiler happy. The end code I write is the same both ways, but the Swift code has a lot more annotations and is a lot less flexible.
In regard to optionals, I've personally found myself using the `property?.member = value` syntax quite a bit, which lets you avoid the nil check in many cases. I don't think a nil optional should be treated as an "error case", as the author puts it, most of the time.
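A minimal, self-contained sketch of that `property?.member = value` pattern (Profile is a made-up type): when the optional is nil the whole statement is a no-op, so no explicit nil check is needed.

class Profile {
    var displayName = ""
}

var currentProfile: Profile? = nil

currentProfile?.displayName = "Anonymous"   // silently skipped, currentProfile is nil

currentProfile = Profile()
currentProfile?.displayName = "Alice"       // now the assignment happens
print(currentProfile?.displayName ?? "none")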
Overall, I've been finding Swift to be more coherent and faster to use than Obj-C in my projects, which is nice. But I'm not a language power user.
Why not just `map uppercase array` ? That way you don't even really need to define a function `toUpper array`. Isn't it customary not to mix lifting and program logic in functional programming languages?
Filter example:
notPrefixedWithA name := not contains (prefix name) ["a", "A"]
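For comparison, roughly the same two one-liners in Swift (the names array is made up):

let names = ["alice", "Bob", "Anna", "carol"]

let shouted = names.map { $0.uppercased() }
let notPrefixedWithA = names.filter { !$0.hasPrefix("a") && !$0.hasPrefix("A") }
print(shouted, notPrefixedWithA)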
The author fails to realize one critical thing—that the imaginary "Objective-C 3" language he wants requires intentionally breaking the fact that Objective-C is a strict superset of C… exactly what Swift has done.
Look at Eero for how a simple updated syntax could look. On top of that, many of Swift's features could run on an updated ObjC as well. The problem is that the features that actually required a whole new runtime and compiler model are very few.
In other words, we could have gotten almost-Swift by building squarely on ObjC and would have avoided most of the runtime and performance issues that still plague the language. Plus it would have been a simpler language than Swift is.
Which is fair enough, but if you look at the amount of effort they are putting in to bring Swift's documentation / tooling up to speed, it seems pretty obvious that Apple themselves have high ambitions for the language.
Also, it's going to be hard to tell how much Swift code is being shipped by Apple - the interfacing with existing Obj-C code is sufficiently clean that it's relatively simple to have parts of a product implemented in Obj-C and other parts done in Swift, and we, sitting on the outside, may never know.
I recently learned that Core Data wasn't used internally at Apple. That freaked me out, because it made me feel that choosing that technology for a serious project was a mistake, despite the extensive documentation and support Apple provides for it.
Unfortunately it's not a written statement (you can imagine that nobody working at Apple would publicly say a thing like that). And it's not a direct source either, so you can very well take it with a grain of salt.
Actually, I am still hoping that someone will contradict me by providing concrete examples of Apple software using Core Data, because, as I stated, I have invested a lot in that technology for an important project.
I got a request to share my project (15k LOC) with the compiler team because they had no larger projects to test on. This was less than a month ago. Draw your own conclusions about how much Swift has been used internally at Apple. Oh, and compile times for any change in that project were 50-60 seconds. That's a minute for changing a single character in one of the Swift files.
Since they don't seem to be scrambling to fix that problem, I can only assume that there's no urgency internally at Apple to fix it.
I tried to hunt down the statement but without success. If I didn't imagine it, then I'm fairly sure it was in the Apple dev forums. However, I did find a posting where someone said they hadn't found the Swift runtime libs bundled with the WWDC app.
So, either the WWDC app had the Swift runtime statically compiled into the app, or it was written in ObjC. Checking the size of binary of the app should give a hint - the runtime libs you need to bundle are huge.
When I first saw Golang, I hated it, but now I think it's the best server-side language. Golang is an insanely pragmatic language. It's not fancy; it just works.
Facing Swift, I had mixed feelings. It's a beautiful language, probably as pretty as Ruby, but what really makes it unique, or extremely productive? I can't find anything.
Am I the only one who agrees with the author that generics probably do more harm than good? I'd argue that generics give programmers a false feeling of control and easily lead them to spend time on over-design.
Pointer ownership is the most important feature of Rust, and everything is built around it: the management of memory, the auto-freeing of resources (a mutex simply unlocks when it goes out of scope). It also ensures that no piece of memory has more than one mutable pointer at any one time, allowing many of the optimisations that const enables in C++, but without actually having to annotate all your variables. You can also guarantee that a new thread shares no memory with another thread, preventing many types of race conditions. Goodbye to memory leaks, goodbye to buffer overflows, goodbye to use-after-free, goodbye to null pointer crashes.
Pointer ownership prevents memory bugs and concurrency bugs. When you have the combination of aliasing, mutability and concurrency is when you have all of these Heisenbugs. Rust attacks it from the angle of not having aliasing. You can have mutability and concurrency, but only one thread should have ownership of the thing being modified.
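To see why that combination is so nasty in languages that don't enforce ownership, here's a minimal sketch in Swift (my own illustration, modern syntax, nothing to do with Rust's actual API): many concurrent tasks aliasing one mutable value.

    import Dispatch

    // One mutable value, aliased by many concurrent tasks:
    // aliasing + mutability + concurrency, the combination described above.
    var counter = 0
    let group = DispatchGroup()

    for _ in 0..<1_000 {
        DispatchQueue.global().async(group: group) {
            counter += 1        // unsynchronized read-modify-write: a data race
        }
    }

    group.wait()
    print(counter)              // usually less than 1000 -- updates get lost

Nothing in the language stops this from compiling; the failure only shows up at runtime, and only sometimes, which is exactly the class of bug the ownership rules are designed to rule out.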
> Am I the only one who agrees with the author that generics probably do more harm than good
It's really the same debate as 'dynamic' vs 'static' languages, since without generics your code relies on runtime type checking for anything remotely complex, and is thus effectively a dynamic language.
Personally, I'm of the opinion that people that don't like static typing only feel that way because they've only used the shitty implementations in Java or C#.
Regardless, this is a religious war that has raged for decades and isn't likely to be settled anytime soon. The answer really comes down to "it depends". I fall firmly in the 'static typing is good' camp, mainly because I have hard evidence to back up my opinion that it results in significantly fewer defects.
(Specifically, very clear reports from our issue-tracking systems showing our defect rate in production dropping by 90% (!!!) when we switched from Groovy to Scala, with a notable increase in productivity.)
Some of the developers complained, since they had to learn new tools. But being professionals, they learned the new tools and were better off for it.
Now, while I'm an extremist religious zealot about proper static typing being the one true way, I'm quite mindful that, for many developers, the tasks they are working on just aren't complex enough for it to make much of a difference in practice - they are able to test all the edge cases and deploy stable software to production, just with a little more runtime testing than they'd otherwise need.
Some languages - such as Go, or pre-enlightenment Java - do not implement generics, and thus require runtime casting in many cases. In these languages there's still a degree of compile-time checking, just not as thorough as it should be. As with dynamic languages, they can work with no perceived issues for projects up to a certain size and provide a reasonable halfway point. Beyond that, you're going to hit a wall.
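To make the casting point concrete, here's a minimal Swift sketch (my own illustration, not taken from the post): the type-erased container accepts anything and fails at runtime, while the generic one catches the same mistake at compile time.

    // Without generics: a stack of Any, checked only at runtime.
    struct AnyStack {
        private var storage: [Any] = []
        mutating func push(_ value: Any) { storage.append(value) }
        mutating func pop() -> Any? { return storage.popLast() }
    }

    var erased = AnyStack()
    erased.push("42")                              // oops, a String slipped in
    if let top = erased.pop(), let n = top as? Int {
        print(n)                                   // never runs: the cast fails at runtime
    }

    // With generics: the element type is part of the container's type.
    struct Stack<Element> {
        private var storage: [Element] = []
        mutating func push(_ value: Element) { storage.append(value) }
        mutating func pop() -> Element? { return storage.popLast() }
    }

    var ints = Stack<Int>()
    // ints.push("42")                             // rejected at compile time
    ints.push(42)
    print(ints.pop() ?? 0)                         // 42, no cast needed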
As to your argument that generics do "more harm"? I'd strongly disagree. If you're unfamiliar with the gotchas generics introduce (variance, for instance, can be a mindfuck), then they can seem difficult and problematic. But like any other professional tool, once you've gotten over the learning curve you're more productive with it than without.
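To show what the variance gotcha looks like in Swift itself (again, my own sketch): the compiler gives Array special covariance treatment, but user-defined generic types are invariant, which regularly surprises people who expect them to behave like arrays.

    class Animal {}
    class Dog: Animal {}

    // Built-in Array gets covariance from the compiler: [Dog] upcasts to [Animal].
    let dogs: [Dog] = [Dog(), Dog()]
    let animals: [Animal] = dogs
    print(animals.count)                           // 2

    // A user-defined generic type is invariant: Cage<Dog> is NOT a Cage<Animal>.
    struct Cage<T> { let occupant: T }
    let dogCage = Cage(occupant: Dog())
    // let animalCage: Cage<Animal> = dogCage      // error: cannot convert
    let animalCage = Cage<Animal>(occupant: dogCage.occupant)  // rewrap explicitly
    print(animalCage.occupant is Dog)              // true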
tl;dr If I wanted to bang a few pieces of wood together, I'd feel comfortable using a hammer. The learning curve is small, and I can connect those two pieces of wood in no time.
My Uncle is a carpenter. As a professional carpenter, he bangs pieces of wood together all day long, every day, for his entire career. As such, a nailgun is a more appropriate tool. While being more complex to use and having a steeper learning curve, he's a professional, and uses a professional tool to do a professional job. Occasionally he might want to bang some quick project together in his shed, and getting out the nailgun is overkill, so he uses a hammer for the odd thing here and there.
I'm a professional programmer. I use professional tools, even if they have a steeper learning curve and might be more complex. Occasionally I want to whip up a quick script, so will just hack it together in Python.
> our defect rate in production dropping by 90% (!!!) when we switched from Groovy to Scala, with a notable increase in productivity)
> Occasionally I want to whip up a quick script, so will just hack it together in Python
Languages like Python and Groovy were originally created to be scripting languages for quickies. Of course what starts off as a short script can easily evolve into a larger production system. Groovy's creator James Strachan based Groovy closely on Java syntax specifically to provide a seamless upgrade path from Groovy to Java when such scripts grow into something larger. He even put in runtime type tags which would become compile-time types without any syntactic changes when code was converted from Groovy to Java. Groovy was innovative beyond its peers Python and Ruby in that way, intended to be a dual dynamic language to statically-compiled Java, enabling easy conversion to the main language when required. Other languages like C# and Scala solved that issue with type inference and by adding a "dynamic" type into the main language instead.
Unfortunately, after Strachan was replaced, the management policy regarding Groovy's purpose changed. All work on a spec to encourage alternative implementations was dropped, and a user-contributed plugin enabling static compilation was duplicated into the main Groovy distribution for version 2. Groovy was then pitched as an alternative to Java, competing head on. What they don't mention in their marketing, however, is that a mere one person wrote Groovy's static compilation code compared to the hundreds who contributed to Java's, or even to Scala's. Adopting Groovy for static compilation is therefore very risky, and a possible cause of your huge defect rate in production.
There is a lot of interest in research on the benefits of static vs dynamic typing. But unfortunately there is not a lot of hard data. There were some experiments, e.g.
http://dl.acm.org/citation.cfm?id=2047861
which seem to support the claim that dynamic typing is better for rapid prototyping. So if you have data that correlate typing disciplines with bug rates, it would be hugely valuable to share it.
On the other hand, there's the 1994 Navy-sponsored study which had as an (informal) conclusion that Haskell was better at rapid prototyping than other languages of the time.
The experiment mostly compared Haskell with imperative languages such as C++ and Ada, but there was also at least one Lisp variant. There were several informal aspects to the study, not the least of which being that there were no clearly defined requirements for the system to be implemented (so it was up to each participant to define the scope), but the conclusion is very interesting nonetheless:
The Haskell version took less time to develop and also resulted in fewer lines of code than the alternatives, and it produced a runnable prototype that some of the reviewers had a hard time believing wasn't a mockup. Many of the alternatives didn't even end up with a working system. It should also be noted that the Haskell participants decided to expand the scope of the experiment; i.e. they didn't "win" because they implemented a heavily simplified solution - in fact they added extra requirements to their system and still finished earlier!
Even though Obj-C wasn't included in the study, there were similar enough C-like languages in it, so my bet is that Haskell would have won against it as a rapid prototyping language as well.
They used Java as the static language, which is relatively cumbersome as far as statically typed languages go, and is also known for its verbosity. "Rapid prototyping" and "Java" don't really mesh to begin with, or at least that's what the common wisdom says.
It seems another survey/study is needed to check the OP's claim (namely, research on productivity in languages with stronger static type systems than Java's).
I do agree with your points, but I think you've over-generalised them to the point where the discussion becomes not only unnecessary but also childish.
Let's bring the context back to Swift in iOS programming, to match its target market, shall we? Could you come up with one use case in which:
.. generics are really useful;
.. the problem hasn't already been solved by a well-recognised 3rd-party lib/framework (by "well recognised" I mean starred or forked on GitHub more than 500 times);
.. the use case would show up in 10% of the top 100 apps on the App Store.
> Could you come up with one use case in which: .. generics are really useful
Errm, arrays/dictionaries that return typed objects so that you don't have to cast everything from id, either with an ugly explicit isKindOf test or by just hoping for the best?
There is nothing that strong typing fixes that can't be fixed by just coding it right, but when has everything ever been coded absolutely right, without bugs? And even if it is coded right to start with, if you forget one rare call site during a refactor you can end up with a crash in the field - or, with Swift, a compile-time error that you fix in a second.
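A small Swift sketch of that exact use case (my own illustration, using a made-up "retries" setting): with the untyped, id-style dictionary a refactor that changes the stored type only shows up as a failed cast at runtime, while the typed dictionary turns the same mistake into a compile-time error.

    import Foundation

    // Untyped, Obj-C style: values come back as Any and must be cast at every call site.
    let legacy: NSDictionary = ["retries": 3]
    if let retries = legacy["retries"] as? Int {   // silently nil if someone stores a String
        print(retries)
    }

    // Typed, Swift style: the value type is part of the dictionary's type.
    var settings: [String: Int] = ["retries": 3]
    // settings["retries"] = "three"               // rejected at compile time
    let retries = settings["retries"] ?? 0         // Int, no cast, no isKindOfClass
    print(retries)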
Container types are the most obvious. Lists, Arrays, Dictionaries, Vectors, Stacks. You'd be hard pressed to find code that doesn't use a container type of some kind.
Incidentally, the problem is much smaller in a dynamically typed language than it is in a statically typed language with containers but no generics.
Compare a Java list prior to generics with one afterwards: using the pre-generics version was an orgy of object casts. This is not the case for dynamically typed languages.
In fact, the common approaches to containers that are valid in a statically typed language are largely wrong in a dynamic language.
"Shitty" may not be the right word, since C#'s type system works fairly well for its problem domain. The point is that the type systems found in C# and Java are conservative and in many cases overly burdensome for the type-safety benefits you get.
As a result, when compared against purely dynamic languages, the advantages of static typing in Java and C# are not so clear cut (Hence why so many developers just use dynamic languages).
If your only exposure to static typing is in Java or C#, then you really haven't seen a good type system at work.
I wonder if it really makes sense to compare a language that is less than a year old to others that have decades of improvement and fine-tuning behind them.
If there were a perfect language, all of us would be using it: each one is the result of compromises between features, performance, readability, simplicity...
For an argument against a hypothetical ObjC 3.0, just look at the unending transition from Python 2 to 3.
I'm not sure it's right to compare it to the Python versioning issues. Apple can just break old Objective-C code in later versions of iOS/OS X if they want to. Also, look at the adoption rate of ARC when it was introduced: pretty much every popular third-party iOS library jumped on to implement it, and those that didn't are usually abandoned projects.
In all the examples he gives, Swift comes with better and more succinct syntax plus more safety than Obj-C, and it is on par with his "ObjC 3.0" idea.
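For instance (my own illustration, not necessarily one of the examples from the post), optionals plus type inference routinely turn a nil-check-and-cast dance in Obj-C into a single checked expression:

    struct User {
        let name: String
        let email: String?        // optionality is explicit in the type
    }

    let users = [User(name: "Ada", email: nil),
                 User(name: "Grace", email: "grace@example.com")]

    // One expression, fully type-checked; no nil checks, no casts from id.
    let emails = users.compactMap { $0.email }
    print(emails)                 // ["grace@example.com"]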
And of course it has tons of other features and flexibility he doesn't delve into.
The whole post for me boils down to his "I hate Generics" rant at the end.