Swift is not perfect, but I've been writing it a bunch recently and it's hands down one of the most productive languages I've used. Fundamentally I believe it might be one of the most important languages, because it's relatively close to the metal, it's a 'modern language' (i.e. functional, with a pretty good type system), open source, and has a major company and platform behind it. I think that if Swift 4.0 adds concurrency to the language (as hinted), we could see it defeat Go on the web.
As a longtime lisp developer I thought I'd love the REPL, but so far I really haven't had much use for it. I find myself using playgrounds quite a bit though.
Swift might be "functional" (which means it has immutable values I guess?) or have a "pretty good type system" (which means it has generics?) but for a supposed "most productive language", I've never heard anyone mention how short the standard library falls.
I decided to start using it this week, and after learning the requisite umpteen fiddly syntax idiosyncrasies I was startled to discover how little Swift supports you compared with traditional platforms (Java, C#, Ruby, Python, Node, Go), where you get a dang HTTP server, for example, for free: in Swift, one requires several[0] thousand[1] lines[2] of[3] code[4].
For a batteries-included language, Swift has a long way to go before thinking about "defeating Go on the web".
> "functional" (which means it has immutable values I guess?)
No, it means that it's a functional language. Please consult Google if you are not sure what that means.
> "pretty good type system" (which means it has generics?)
No, it means that it has a more advanced type system than any of the languages you mentioned. It's most likely the most popular language with a type system that can be considered somewhat advanced.
There might be idiosyncrasies, but compared with, say, JS it's still miles ahead.
Yeah, the standard library is somewhat lacking, but that will be fixed. The language itself is very solid.
Also don't judge a language by lack of an HTTP server in the standard library.
Please inform me what Swift's type system has over, say, Java's, as I don't know of anything and neither does a cursory Google.
The lack of an HTTP server is only an example. Let's see you parse JSON, or stream a Unix socket, or any of countless basic things that even Node includes in its standard lib. Swift doesn't really have much of anything except some types and traits: https://developer.apple.com/library/ios/documentation/Genera...
The Foundation libraries provide that functionality, they're not core to the concept of a programming language, so they're not part of the minimal standard library. There are also numerous community provided implementations.
One way that Swift's type system is more advanced than Java is the ability to define extensions to protocols (interfaces) constrained to specific types, i.e. you can add "average" to "SequenceType where Generator.Element == Double". You can also use this to provide default implementations for protocols, but only in the case of specific associated types.
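For the curious, here's a minimal sketch of such a constrained extension, written with the current `Sequence`/`Element` spellings of Swift 2's `SequenceType`/`Generator.Element` (the `average` implementation itself is just illustrative):

```swift
// "average" exists only on sequences whose elements are Double;
// other element types don't even see the method.
extension Sequence where Element == Double {
    func average() -> Double {
        var total = 0.0
        var count = 0
        for value in self {
            total += value
            count += 1
        }
        return count == 0 ? 0 : total / Double(count)
    }
}

print([1.0, 2.0, 3.0].average())  // 2.0
// ["a", "b"].average()           // error: no such method on [String]
```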
Perl 6 did some work on this. I'm not sure if it's fully supported yet, but the design docs[1] go into detail on how it's supposed to work, so it may be worth looking at.
> Method Cascades
I suspect you might find disambiguating your method/attribute access easier in your "with" syntax if you make it match what Swift appears to do for anonymous closure variables and use $0. (Pardon my unfamiliarity with Swift; I may be mistaken on some obvious things.)
E.g. Instead of
with let task = NSTask() {
    launchPath = "/usr/bin/mdfind"
    arguments = ["kMDItemDisplayName == *.playground"]
    standardOutput = pipe
    launch()
    waitUntilExit()
}
do
with let task = NSTask() {
    $0.launchPath = "/usr/bin/mdfind"
    $0.arguments = ["kMDItemDisplayName == *.playground"]
    $0.standardOutput = pipe
    $0.launch()
    $0.waitUntilExit()
}
That way, if you want to use some unrelated variable within the block, or the output of an attribute or method as the input of some other attribute or method, it's not ambiguous.
Okay, then I don't really see a reason for Erica's syntax, which is special and either ambiguous or limited. How do you know whether you are calling a method on the given object or a plain function? How do you know whether you are using an attribute of the object or a variable? Just rely on scope? That seems far more problematic and error-prone for the small convenience of not typing three more characters per object access.
Your solution is actually simple enough that I'm not sure there needs to be a change to the language, unless there's some special behavior they can and should impart that we aren't thinking of.
That said, I don't write swift, so feel free to take my opinion for whatever you think it's worth. :)
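For what it's worth, a plain generic helper seems to get most of the way to the $0 version without any language change. A rough sketch — the `with` function here is ordinary library code I made up, not a real Swift feature, and `Process` is NSTask's later name:

```swift
import Foundation

// An ordinary library function standing in for the proposed syntax:
// hand the freshly created object to a closure, then return it.
@discardableResult
func with<T>(_ value: T, _ body: (T) throws -> Void) rethrows -> T {
    try body(value)
    return value
}

let task = with(Process()) {
    $0.launchPath = "/usr/bin/mdfind"
    $0.arguments = ["kMDItemDisplayName == *.playground"]
    // $0.launch() etc. would go here as in the original example
}
```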
I'm not sure if the function call could be optimized out across module boundaries; that's really the only justification for dedicated syntax I can see. It's somewhat obsoleted in Swift 3 regardless, as SPM uses static linking, where the call should be optimizable.
Aside from the ';', no special syntax needed. And unlike other Smalltalks, there is a parallel with '|' for composition instead of cascade (so send the next message to the result instead of the receiver of the first expression), which helps disambiguating keyword syntax without requiring parentheses:
This is especially useful when building an expression incrementally, as it doesn't require going back to the start of the expression. Of course, stringByAppendingString: isn't such a good example, because the ',' message makes this quite a bit more compact:
Error and Result types in the standard library, macros, per-target configurations and tests, a fix for namespacing issues due to the lack of a central repository for open-source libraries. The presence/lack of each of these are some of my favorite things about Rust, and it's too bad to see them miss this release of Swift.
Swift's error handling model essentially tosses type information out the window. I really don't understand why it was created instead of a first-party Result type.
There has been discussion about statically typed errors on the evolution mailing list — some core team members said they may look into it in the future, while others seemed more doubtful of its usefulness.
Why do you think a typed error handling model is better than introspecting the error at the catch site? I'm not sure I buy it.
Errors should be Just Another Value, so the difference between untyped errors and typed errors is the same as the difference between just using Any for every parameter and return value, and using actual types.
You can use Any for everything, and it can even be totally safe if you use as? correctly, but it's obvious why no one does that. It's not obvious to me why people are fine with that situation for errors - again, the catch-all handler issue mentioned above, and the bizarre special-casing of NSError.
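To make the Any analogy concrete, here's a sketch with a made-up `FetchError`: the signature can't say which errors it throws, so every catch site ends up doing the equivalent of an `as?` downcast plus a mandatory catch-all:

```swift
// A made-up error domain for illustration.
enum FetchError: Error {
    case notFound
    case timeout(seconds: Int)
}

// "throws" alone is the error-world equivalent of returning Any:
// callers can't see that only FetchError ever comes out.
func fetch(id: Int) throws -> String {
    guard id > 0 else { throw FetchError.notFound }
    return "record-\(id)"
}

do {
    _ = try fetch(id: -1)
} catch let error as FetchError {   // the as?-style downcast
    switch error {
    case .notFound: print("not found")
    case .timeout(let s): print("timed out after \(s)s")
    }
} catch {
    // Required catch-all, even though we "know" it can't be reached.
    print("unknown error: \(error)")
}
```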
First-class error handling is a little more ergonomic than just having a `Result` type that is used by convention, so I can see why Swift went that route if they were looking to improve ergonomics a bit. Rust has been gradually working on making its own `Result` type a bit more first-class, first with the `try!` macro, and in the future with the `?` operator which will improve upon `try!`.
If Swift allowed something like
func throwing() throws (FooError, BarError) -> Int
to create an implicit sum type, I'd find it acceptable, but not being able to tell what types of errors an API will throw is far from ergonomic - you'll need a catch-all block if you use anything but NSError[1] or ErrorType, even if you know the function only throws FooErrors.
[1] this is especially weird because NSError is just some random Foundation class, it's not anything inherent to the Swift language.
One reason to avoid explicit lists of "error types" (exceptions) for methods is that method parameters are usually covariant whereas you really want exceptions to be contravariant[1]. This has big implications for higher-order functions and it's why everybody rightly hates explicit "throws" clauses in Java.
[1] Or to put it in more practical terms: Imagine two classes/interfaces A and B where B subclasses A. Any method on B that overrides a method on A is free to accept a "more restricted" parameter than the method on A, but since it (presumably) does something more specialized it must also be able to throw more exceptions that A's method was. (Maybe it's accessing files and needs to be able to throw FileIOError, or whatever.)
You have the variance bit totally backwards. Methods are contravariant on their input and covariant on their results. A method which can only feed cats is not a method which can feed all animals. A method which gives me a cat, well, that is certainly a method that gives me an animal. Exceptions are another type of result. Please see https://en.m.wikipedia.org/wiki/Covariance_and_contravarianc... for more info.
I agree with your assertion that this is part of what makes explicit throws a pain, though. I hadn't thought about it that way, thanks for the insight.
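Swift's own function types illustrate the corrected direction — parameters contravariant, results (and by analogy errors) covariant. A small sketch:

```swift
class Animal { func speak() -> String { "..." } }
class Cat: Animal { override func speak() -> String { "meow" } }

// A function that handles any Animal and produces a Cat...
let feedAnyAnimal: (Animal) -> Cat = { _ in Cat() }

// ...can safely stand in where a (Cat) -> Animal is expected:
// it demands a narrower input and delivers a wider output.
let feedCat: (Cat) -> Animal = feedAnyAnimal
print(feedCat(Cat()).speak())  // meow

// The reverse assignment would not compile:
// let bad: (Animal) -> Cat = { (c: Cat) in c }  // error
```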
Idiomatic Swift shouldn't really be using inheritance though; protocols (which support associatedtype, making varying error types a non-issue) are generally preferred.
Functions would be free to just "throws ErrorType" (probably the default for blank "throws", to avoid breaking backwards compatibility) if they really wanted to though, just like they can accept and return "Any", and the behavior would be identical to today.
I prefer Result though, which just solves the problem of disjoint error types by using "mapError" and a sum type.
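A rough sketch of that mapError pattern — note this thread predates `Result` landing in the standard library (Swift 5), and `ParseError`/`NetError`/`AppError` are made-up names:

```swift
// Two disjoint error domains, unified by a made-up sum type.
enum ParseError: Error { case badInput }
enum NetError: Error { case offline }
enum AppError: Error {
    case parse(ParseError)
    case net(NetError)
}

func parse(_ s: String) -> Result<Int, ParseError> {
    if let n = Int(s) { return .success(n) }
    return .failure(.badInput)
}

// mapError lifts the specific error into the shared sum type,
// so the caller matches on one exhaustive enum.
let combined: Result<Int, AppError> = parse("41")
    .mapError(AppError.parse)
    .map { $0 + 1 }

switch combined {
case .success(let n): print(n)                 // 42
case .failure(.parse(_)): print("parse failed")
case .failure(.net(_)): print("offline")
}
```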
I've seen interest on the lists for modifying the model to allow a single failure type to be specified. I hope someone will write up a proposal once August rolls around and the lists are open again to new feature ideas.
Sorry but final by default is not a good idea IMO.
All my work is client work. There have been a bunch of occasions where I've had to work around some weirdness of an Apple SDK by creating my own subclass. I could see Apple releasing an SDK and many of the classes being @final. Then I'm stuck in a situation where I can't provide a deliverable to the specifications a client wants due to a limitation in an SDK.
That's my only fear though really. Rarely do I personally do a lot of subclassing unless I absolutely have to or it really does make sense to.
Apple's ability to release a Swift-based SDK that uses "final" and "final" being the default for classes in Swift are completely orthogonal concepts.
Apple could (and should, I think) release future SDKs that use all-final classes and require composition instead of the incredibly fragile base classes that currently make up UIKit and Foundation - they'd just add the "final" keyword.
Since UIKit is inheritance-based currently, in a final-by-default Swift, it would be imported as "subclassable", "nonfinal", or whatever. Nothing would change other than the default for newly-written Swift code.
It would be nice to have an "evil_override" keyword (or some other such name). There's no fundamental technical reason you can't do it, with unsafe pointer black magic if necessary, and it's sort of like an LD_PRELOAD: it's not a good solution, but sometimes you have to break abstraction barriers to get things done, and at least if you're using it you're explicitly saying you're breaking abstraction barriers.
final by default doesn't prevent someone from manually adding final to their classes.
Idiomatic Swift is primarily protocol-based, object-orientation is mostly used when talking to the Cocoa frameworks, which aren't idiomatic Swift code.
Back in the day, with Smalltalk, a programmer could override everything. If you did something that broke derived classes, you were simply being a bad programmer. You simply didn't do that, and if you couldn't deduce if your change would do this or not, you either had a badly architected system, or you were being a bad programmer.
This is how it should work in many production environments: Are you 100% sure about that? No? Don't do that! Then start asking why you can't be sure, then fix that. Rinse, repeat.
"final" allows the compiler to strictly enforce that "don't break things" idea, instead of delegating it to fallible humans. (it also lets the compiler make your code faster)
By using the tools that Swift provides - preferring value types, and falling back on final classes, I can much more easily deduce what my changes will do.
Non-final classes create an additional public API that framework authors need to support - the ability to change any behavior. Reducing the surface for potential errors makes frameworks and their clients more robust.
> By using the tools that Swift provides - preferring value types, and falling back on final classes, I can much more easily deduce what my changes will do.
No disagreement here.
> Non-final classes create an additional public API that framework authors need to support - the ability to change any behavior. Reducing the surface for potential errors makes frameworks and their clients more robust.
Since Smalltalkers knew all their code was "surface," there was motivation to keep things very encapsulated. (Perhaps this is part of why the Law of Demeter was so big in that programming culture.) Synergistic with this was the heavy use of the very powerful debugger. If your codebase was mostly relatively stateless or very well encapsulated, you could time-travel with ease in the debugger by unwinding the stack, recompile your method in place, and continue on. Conversely, if you wrote code that didn't have those qualities, your fellow programmers would get annoyed at you for making their lives harder and their tools much harder to use.
Increasing the surface makes frameworks more flexible and necessitates good design throughout. Is there a trade-off? Sure. The really good Smalltalkers spent lots of time reading code and exploring stuff in the debugger/browsers. And sometimes, you could be stymied because you couldn't rule out stuff and risk a blow-up in production. And to be fair, in my estimation, Smalltalk projects were less robust -- but got fixed really quickly.
Nowadays, I think the sweet spot would be in a simple language with super fast compile/edit/test cycles, with equally powerful debugging, and with type annotations.
I don't have any experience with Swift, but how do you approach unit testing with everything `final`? Being able to inject mock subclasses is a staple of OO testing.
I don't know Swift, but I know testing... your constructors should accept interfaces (for things that are complex enough that you'd want to mock them), not concrete types. In Swift I gather you'd use protocols.
Shrug... at least in the Java world, modern mocking frameworks render obsolete the J2EE-era "interfaces for the sake of interfaces" boilerplate. On the other hand, none of them work if the class you're mocking is `final`.
Protocols and generics cover everything you need to do that. Idiomatic Swift makes little use of classes to begin with, it's mostly structs, enums, and protocols, none of which allow inheritance.
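A sketch of what that looks like in practice — all names here are made up, and the "mock" is just a struct conforming to the protocol, so no subclassing is needed even though the production class is final:

```swift
// Production code depends on the protocol, not the concrete class,
// so the concrete class can stay final.
protocol DataFetching {
    func fetch(path: String) -> String
}

final class HTTPFetcher: DataFetching {
    func fetch(path: String) -> String {
        // real networking would live here
        return "live:\(path)"
    }
}

final class Greeter {
    let fetcher: DataFetching
    init(fetcher: DataFetching) { self.fetcher = fetcher }
    func greeting() -> String { "Hello, \(fetcher.fetch(path: "/name"))" }
}

// In tests, inject a stub struct instead of subclassing:
struct StubFetcher: DataFetching {
    func fetch(path: String) -> String { "stub" }
}

print(Greeter(fetcher: StubFetcher()).greeting())  // Hello, stub
```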
Once Swift has achieved its goal of readability and the lexicon has stabilized, it'll be nice if they [re]introduce a more compact syntax, that you could opt into on a per-file basis, say by setting ".swifter" as their extension.
Bring back ++ and -- and currying, even if just as syntactic sugar that can be converted into the regular syntax by a tool, for those who choose to write in it.
I appreciate Swift's philosophy and agree with their decisions so far, but there's a certain beauty to concise code (when it feels like writing maths) and it'll certainly make prototyping in Playgrounds more fun and, shall we say, swift.