Swift Generics Evolution (timekl.com)
80 points by mpweiher on April 15, 2019 | 100 comments



I've always thought of Swift as a sort of "sister language" to Rust, since so many features have almost the same semantics and syntax, but with some keyword names switched around and slightly different punctuation.

These proposed changes continue in that vein :)

- As the post notes, `any Shape` is the same as Rust's `dyn Shape` - both languages originally allowed you to write simply `Shape` (i.e. a trait/protocol name) to get an existential type, but later decided that was confusing.

- `some Shape` is the same as Rust's `impl Shape`.

- `Shape<.Renderer == T>` is equivalent to Rust's `Shape<Renderer=T>`.

(This is not meant as a criticism of Swift; Rust has copied some things from Swift as well. Just interesting to note.)
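Roughly, side by side (a sketch with a hypothetical Shape protocol; the Swift spellings are the proposed ones from the post, with the Rust equivalents in comments):

    protocol Shape { func area() -> Double }
    struct Circle: Shape { func area() -> Double { return 3.14159 } }

    // Existential: concrete type erased, may vary at runtime.
    // Rust: fn describe(shape: &dyn Shape)
    func describe(_ shape: any Shape) { print(shape.area()) }

    // Opaque: one fixed concrete type, hidden from the caller.
    // Rust: fn unit_circle() -> impl Shape
    func unitCircle() -> some Shape { return Circle() }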


You may have noticed that Swift’s keywords have the same length as Rust’s. This isn’t an accident, apparently: https://twitter.com/jckarter/status/1115343308782358528


Is `some Shape` actually the same as `impl Shape`? In Rust, `impl Shape` means "a fixed, statically knowable type which implements Shape, and which the implementation of this function knows but its callers do not". I get the impression that in Swift it means "one of possibly numerous types that implement Shape, chosen at runtime by the implementation, and returned to the caller as an existential".

I may have misunderstood what is planned for Swift though.


`some Shape` is literally identical to `impl Shape`, including the semantics you discuss above. The concrete return type of a function returning `some Shape` must be the same on all execution paths; the signature just avoids expressing to the caller what that type is.

Incidentally, this in principle allows the optimiser to specialise the caller to the return type of this function, avoiding the existential altogether.
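A quick sketch of that rule (hypothetical types):

    protocol Shape {}
    struct Circle: Shape {}
    struct Square: Shape {}

    // Fine: a single underlying type (Circle) on every path.
    func unitCircle() -> some Shape {
        return Circle()
    }

    // Rejected by the compiler: the underlying type differs between paths.
    // func eitherShape(_ big: Bool) -> some Shape {
    //     if big { return Square() } else { return Circle() }
    // }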


I believe that's incorrect. Swift generics deal with statically-knowable types.

My reading was:

`some` -> Some specific type with these constraints which will be known at compile time

`any` -> Any value conforming to these constraints which may vary at runtime
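Concretely (hypothetical types; `any` here is the proposed spelling):

    protocol Shape { func area() -> Double }
    struct Circle: Shape { func area() -> Double { return 3.14 } }
    struct Square: Shape { func area() -> Double { return 1.0 } }

    // `some`: one concrete type, fixed at compile time, hidden from callers.
    func makeShape() -> some Shape { return Circle() }

    // `any`: an existential box whose contents can vary at runtime.
    let shapes: [any Shape] = [Circle(), Square()]
    let total = shapes.reduce(0) { $0 + $1.area() }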


In the current “opaque types” prototype implementation `some` still hides the implementation of the type across module boundaries, meaning it can be changed without recompiling clients. The specific guarantee is that (in a single process execution) each invocation of a some-returning function returns the same concrete type.


Not a coincidence. There are people who work on Swift who worked on Rust, and vice versa.


Swift is a really cool language. Quite similar to Rust, but easier to use (GC, no borrow checker, no multi-threading safeguards, ...).

I just wish cross platform support was a real priority. The Ubuntu packages are all there is, and much of the ecosystem is centered around (Mac/i)OS.


I don't really see how "quite similar to Rust" and "GC, no borrow checker, no multi-threading safeguards" can both be true at the same time.


They're both highly influenced by the ML family of languages.


It's quite similar to Rust... if Rust literally used Arc<Mutex<T>> everywhere. And Arc<Mutex<T>> everywhere might actually have lower throughput than tracing GC! (Though it's arguably preferable for low-latency code, so it might make some sense for the typical use cases of ObjC and Swift.)


That's not entirely true; it would be similar to `Arc<T>` everywhere, because the Swift compiler and the language are by default not concurrency safe (there are runtime features to detect issues here, similar to RefCell, though). This means that Rust has to use `Arc<Mutex<T>>` in order to satisfy the compiler, while Swift gets away with `Arc<T>` (and I'm currently not sure, but it might even be only `Rc<T>`) because the compiler doesn't prevent it.


The advantage of ARC is that the cost of deallocation is fairly constant and always happens at the same moment. This helps enormously when you are doing performance-sensitive stuff like scrolling over a list of complex cells.

With the Java style of GC, the minimum and maximum cost and timing of GC are just not as predictable, so in real-world scenarios ARC-based apps feel more fluid.

It's like playing games. You'll notice the 10FPS dips more than the difference between 50FPS or 60FPS on average.
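Concretely, the release point with ARC is deterministic (a tiny sketch):

    class Texture {
        let name: String
        init(name: String) { self.name = name }
        deinit { print("freeing \(name)") }   // runs as soon as the last strong reference goes away
    }

    func renderCell() {
        let t = Texture(name: "thumbnail")
        // ... use t ...
    }   // `t` is released right here, every time, not at some later collection pause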


Not really.

Cascading deletes of highly nested data structures are similar to stop-the-world pauses in tracing GCs.

If the way destruction is triggered does not take this into account, stack overflows are bound to happen.

Finally it introduces slowdowns in shared data structures used across threads.

Herb Sutter has a very good CppCon talk about these issues.

EDIT: Forgot to mention that Swift was the losing language in a CCC talk about implementing device drivers in memory-safe languages. All the ones with tracing GC had a better outcome.


I think nothing of what you just wrote undermines what I just said.

> Cascading deletes of highly nested data structures are similar to stop-the-world pauses in tracing GCs.

Which is simply difficult for every language. But even then it's still more predictable than Java's "we'll do it when we feel ready for it" approach.


Yet all the languages with a tracing GC wipe the floor with Swift's reference counting implementation.

"35C3 - Safe and Secure Drivers in High-Level Languages"

https://www.youtube.com/watch?v=aSuRyLBrXgI

https://github.com/ixy-languages/ixy-languages

The subject of Herb's talk was actually going through all the reference counting problems to introduce, in the end, a lightweight implementation of deferred pointers, which is nothing more than a basic tracing GC.

https://www.youtube.com/watch?v=JfmTagWcqoE


Well, if you think a driver is somewhat similar to a front-end implementation to begin with, then it's OK. If "wiping the floor" means what I see on a day-by-day basis comparing JVM-based UIs versus the same thing built in Swift, you're probably talking about something different to begin with.

I'm talking about predictable performance and guaranteed minimum performance. I don't think you're speaking about the same thing.

The video seems to be interesting anyway, so thanks for that.


Actually it is, that network packet is not going to wait.

"Wiping the floor" means getting last place against all tracing GC languages used to research the mentioned paper.

Yes, many JVM based UIs do suck, mostly because the authors didn't bother to learn how to use Swing properly by reading books like Filthy Rich Clients.

The thing with Swift UIs is that they are actually C and C++ UIs, given that those are the languages used to implement Core Graphics, Core Animation and Metal. Additionally, Cocoa is still mostly Objective-C, with performance-critical sections making use of NSAutoreleasePool like in the good old NeXTSTEP days.

Coming back to Java, factory automation and high integrity systems are perfectly fine with it.

https://www.aicas.com/cms/

https://www.ptc.com/en/products/developer-tools/perc


Well try to use C++ without the C part, see how that goes. C is a part of Objective-C and you can write Swift code that will compile byte for byte into code as performant as C. It's ugly, it's unsafe but it's part of the language and if you need it it's there.

You sound like those people that complain that Objective-C is so slow compared to C because NSArray is much slower than its C counterpart. Objective-C is always exactly as fast as C, since you can always write C in Objective-C.

And if all Java-based UIs suck despite the fact that it's the most popular programming language, and despite the fact that it powers the most popular operating system, you might start suspecting there's something wrong with it; but nope, you've read a paper and watched a video. About a kernel driver. OK.


The funny part is that the languages with tracing GC that wipe the floor with Swift's memory management in the paper aren't even Java, but rather OCaml, Go and C#.

Your focus on Java, without spending one second reading the paper's benchmarks, from a well-respected researcher in the CCC community, just demonstrates a typical defensive reaction among reference counting proponents.

Using C++ without the C part is the whole point of modern C++ and the Core Guidelines. In fact there are several benchmarks where the C++ version gets better optimized.

As for Android, it isn't Java as we know it, and any travel through /r/androiddev will teach you that Google might have tons of PhDs, but they certainly don't work on Android, given how a lot of things are implemented.


I'm kind of baffled you never got back to the first statement I made. It's like you got totally lost in your mission to prove that Swift sucks. I never claimed magic performance in Swift, I said ARC might be slower but it's a lot more predictable with a better minimum performance.

The only thing I see is "here's a ton of text and video to slog through made by people smarter than me", nothing concrete that undermines my original statement in any way.

And yes C++ cannot exist without C as it is an extension to C. You can write pure C in a C++ file and it will still work. You can write pure C in an Objective-C file and it will still work. You can write some ugly bastard syntax version of C in Swift and it will still work.

I'm very much willing to accept Swift's performance for a device driver (possibly the first device driver ever written in Swift) is worse than in many other languages if you stick to the safe bits of Swift. It just doesn't have much to do with what I wrote.


They both have similar goals as far as mitigating certain classes of runtime errors through static checks. Both have nice type systems, and features like explicit mutability and nullability.

Like you say the approach is pretty different: when it comes to the tradeoff between safety/performance and developer ergonomics Rust leans toward the former where Swift leans toward the latter.


Kotlin/Native might turn out to be that, but it depends pretty much where JetBrains is willing to take it.

In any case, as a rule of thumb, systems languages that come with an OS/platform usually win in the long run.


If GC means garbage collection here, Swift definitely doesn't have that.


Sure it does; reference counting is a GC implementation algorithm from a CS point of view.

The Garbage Collection Handbook, chapter 5.

http://gchandbook.org/

Just one key CS reference for compiler writers among a few others.


>, reference counting is a GC implementation algorithm from a CS point of view. The Garbage Collection Handbook, chapter 5.

Yes, that GC book always gets cited to educate everyone on terminology, but it never settles how "garbage collection" is actually used in regular conversation. I've made previous comments on why citing that book just adds to the confusion, given how people typically communicated that rc != gc before that GC book was published.

https://news.ycombinator.com/item?id=11764887

https://news.ycombinator.com/item?id=7866732


There are plenty of CS papers referring to reference counting as a GC implementation algorithm, going all the way back to Lisp.


By that measure C++ and Rust are GC'd languages because they have smart pointers.


No, because in Rust and C++ using smart pointers is an explicit choice that is applied manually when required, and it is visible both in the type signature everywhere a reference-counted type is used and in the construction of values.

In Swift the reference counting is an implementation detail of automatic and transparent memory management (which you only really have to care about when you have cycles).

That's why Swift definitely counts as a GC language for me.

Sure, if you put things behind an Arc<T> or shared_ptr you use reference counting in C++ and Rust, but it is not an inherent feature of the language.
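The one place the counting really surfaces in Swift is cycles, e.g. (a minimal sketch):

    class Node {
        var next: Node?
        weak var previous: Node?   // `weak` breaks the cycle that two strong references would create
    }

    let a = Node()
    let b = Node()
    a.next = b
    b.previous = a   // without `weak`, a and b would keep each other alive forever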


In Rust, using smart pointers is not much of an "explicit choice" - it's simply the idiomatic thing to do when something might be kept around by more than one parent object.

Rc<T> basically means that there are multiple sources of control over T's lifetime, but these are all within a single thread; and Arc<T> signals that the control might extend to multiple threads. It's not "mere implementation" that we're dealing with here; it's the very semantics of the code as it relates to Rust's expanded take on the well-known RAII pattern.

Swift simply lacks an equivalent to either the Rc<> specifier or e.g. Rust's Box<>, which expresses the semantics of an object which is heap-allocated and accessed via an indirection, and verifiably has at all times a unique "owner" controlling its lifetime (as per usual RAII).


You can argue it’s an explicit choice because you can often choose to structure your code in a way that doesn’t require Rc. Of course, it exists because sometimes, you do have to or want to, but those cases are pretty rare in my experience.


ARC only applies to reference types. The programmer has access to value types, and manual memory management if they want it.
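For example (a small sketch of both options):

    struct Point { var x: Double; var y: Double }   // value type: copied, no reference counting involved

    // Manual memory management when you want it:
    let buffer = UnsafeMutablePointer<Int>.allocate(capacity: 16)
    buffer.initialize(repeating: 0, count: 16)
    buffer[0] = 42
    buffer.deinitialize(count: 16)
    buffer.deallocate()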


Yes and no. "No GC" means "no mandatory GC when data isn't shared between threads".


This is why I say memory management isn't "either GC'ed or manual", because it's a continuum between "fully manual" (runtime hands you a slab of bytes, you do the rest) and "fully automatic" (you never worry about anything, not even cycles in the values, and it all "just works"). Between those two extremes there's a lot of gradation, not to mention the fact that all schemes can generally be mixed within a single program, language/runtime permitting.

Thus, arguing whether some particular middle point is or isn't "garbage collection" isn't anywhere near as useful as people seem to suppose, especially if it's being argued in a context where "GC is morally bad in all forms" or something like that. It's all a bunch of tradeoffs and there is no single perfect answer to all problems.

Note that even most "manually allocated" languages don't actually fall into my manual extreme; generally something more granular and automatic than that is offered. However, while nothing except arguably raw assembler defaults to that "fully manual" allocation, it's a useful last-ditch option in a lot of places, often wrapped up with just a touch more automation into something called "arena allocation", where you don't care about where the arena lives in RAM per se and it integrates with the rest of your allocator otherwise, but is just a big slab of bytes otherwise. Even in the GC'd languages I've seen "just give me a big slab of bytes and go away" used, even if it has no formal support.

Personally, I'd consider "reference counting" as a "garbage collection" scheme if the references are counted automatically, and as manual memory management if you're in a context where you have to manage them manually, but YMMV.

I break it down that way mostly because in the automatic case, you get the general advantages of automation, in that it's largely correct but often somewhat slower (because it can't elide anything), and managing them manually permits more sophisticated schemes but also is massively error prone (to the point I wouldn't use it for anything anymore; the benefits are available from other techniques and the costs are insanely high).

So personally I'd go with smart pointers being an embedded method of garbage collection in a language/runtime that generally is written for manual memory management. It doesn't make the outer language a GC'ed language, nor is it some sort of betrayal of the manual memory management ethos or whatever. Real programs in C++/Rust at scale will tend to use lots of memory management techniques, many of them with high degrees of automation, but the language and runtime themselves are generally manually-managed. It doesn't matter how many smart pointers your program uses, arena allocation is always an option for your next bit of code.


> in the automatic case ... it's largely correct but often somewhat slower (because it can't elide anything)

Not entirely true; in the case of ARC in Swift/ObjC a lot of effort was put in to optimizing retains and releases wherever possible. The de facto standard platform for ObjC (Apple's) had a strong set of conventions around memory management already. They were formalized in such a way that some of the defensiveness required in manual retain/release could actually be proved unnecessary in ARC code. (A few semantic changes were made to ObjC as well to support ARC.) Not in all cases, of course, but also not in none.


I guess that effort is still not good enough.

"35C3 - Safe and Secure Drivers in High-Level Languages"

https://www.youtube.com/watch?v=aSuRyLBrXgI

https://github.com/ixy-languages/ixy-languages


Interesting, thanks! Added to my to-read list.


When people say GC in computer science, they don’t typically mean reference counting. Reference counting is an automatic memory management scheme along side GC, not inside it. One way to think of it is that if it lacks a separate GC phase and can’t deal with cycles, it isn’t GC.


Nah, that is conflating GC with tracing GC, because it is too much to type/say one additional word.

Lay people also use wrong terms when talking about other scientific fields, that doesn't make them right by quantity of use.


I don’t think McCarthy had reference counting in mind when he coined the word garbage collection. I think it’s more likely that people say garbage collection when they actually mean the more general term of automatic memory management. No one would ever mistake arena based memory management as a form of garbage collection, for example, even though it is obviously a form of automatic memory management.


Swift doesn't actually use garbage collection, it uses a variant of Objective-C's Automatic Reference Counting (ARC) which was introduced a few years prior to Swift.


If we’re being pedantic, Automatic Reference Counting is a form of garbage collection.


Working with generics in Swift often made me feel I was missing something, forcing me to refactor my code once things moved past trivial usage of generics, because the types started fighting back. Indeed, I noticed that the requirement that types are declared "from the outside" made things harder.

Another thing is combining protocols with generics. That feels a lot more convoluted than might be necessary.

Seeing people express exactly what is the problem and what could be a solution is impressive. I hope we’ll see reverse generics soon.


Whenever possible, I like using tech that I can mostly understand.

A small simple language is something Swift is not.


Swift aims to be useful even if you don't understand parts of it by the principle of progressive disclosure.


And that is an interesting experiment and perhaps a worthy goal, but having used Swift for a few years now, I don't think they succeeded at this. I'd say it demonstrated that "progressive disclosure" may be a great concept for games, or apps, but there's just way too much difference between someone new to programming and a professional programmer for a language to cover all these cases. Apple doesn't even have a non-pro Final Cut any more.

Even in my first few days, having read Apple's Swift documentation, I had to continuously resort to StackOverflow for answers, which frequently sent me to the language grammar (which had many mistakes in it, and I think still does), and the Swift bug tracker. The inflection point on the learning curve is right after "hello world", which makes for great demos, but that's it.

Personally, I think even Common Lisp does a better job at progressive disclosure. There's huge areas of CL that you can simply ignore. I worked in CL for years before I bothered to learn about conditions, special variables, or 90% of LOOP's features. In Swift, almost right away I had to read everything related to errors. You can't ignore it, except in the simplest program.

I studied generics in college, as part of my data structures class, or maybe my compilers class. I've watched Alexis Gallagher's "PATs" video at least 3 or 4 times. I still don't understand Swift generics. Or maybe I understand what tools are provided, but I don't understand why you'd build (or want) a tool that makes it so hard to, say, define a new type with an "isEqual" method to determine if it's the same as another.

Every Swift programmer hits the "protocol can only be used as a generic constraint" phase pretty quick. My complaint with Swift generics was never that they're too simple. It would be great if this were just a corner that PL/type-system geeks could geek out on, but it's an area that you can't ignore.


Hardly. Trying to read someone else’s code is “smack you in the face with a book” disclosure.

Go/C# pose no issue, but things like C++ cause total WTF moments where other people's tastes/habits become mutually unintelligible dialects.


What’s your ideal simple language?

I’d argue that with type inference, immutability, and non-null as a default, Swift is an easy language for anyone to be productive quite quickly.

    let x = "I'm immutable and not null"

    let favorite = ["Java", "Perl", "Swift"].shuffled().first


Why is `shuffled()` a function and `first` a property? Seems weird and not simple.


While you are free to implement functions and properties however you like, usually it's pretty clear which one is which. shuffled performs non-trivial computation and doesn't seem like an intrinsic property of the array, so it's a function; the opposite is true for first.


Also on an immutable array, `first` should always return the same value where `shuffled()` may return different values each time it's called.


If it's anything like C#, .shuffled needs to return a new object and incurs some expense to do so, so it's a function/method, but .first is immediately available, requires only a trivial amount of computation and does not return a new array/list, so it's a property.


shuffled() can accept a randomness generator; this form just uses the default. Also, .shuffled() returns a new array. For in-place shuffling there is .shuffle(). This pattern is consistent: verb() is for in-place actions, verbed() is for actions returning a new collection, e.g. sort()/sorted().

Btw, there is also .first(where:) which accepts a predicate.
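For example:

    var numbers = [3, 1, 2]

    let sortedCopy = numbers.sorted()      // "verbed": returns a new array
    numbers.sort()                         // "verb": sorts in place

    let shuffledCopy = numbers.shuffled()  // new shuffled array
    numbers.shuffle()                      // shuffles in place

    let firstEven = numbers.first(where: { $0 % 2 == 0 })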


Contrast that to Objective C < 2, Ruby, or Smalltalk where everything is a message sent to the object and there are no exposed, public fields.


You're misremembering. Prior to ObjC 2, classes indeed exposed fields. Convention was that you should not touch them and use accessors instead, but they were most definitely there, declared in the `@interface`.

    @interface Foo : NSObject
    {
        NSString * name;
        NSInteger count;
    }

    - (NSString *)name;
    - (void)setName:(NSString *)newName;

    - (NSInteger)count;

    @end
(This is still legal, of course, but bad practice now that there's `@property` synthesis as well as ivars being declarable in either an extension or the `@implementation` block.)


A couple reasons, which are used consistently throughout Swift:

Shuffled may accept arguments; first does not.

The runtime cost of first is nearly zero, whereas shuffled is doing significant work (constant versus linear time)

It sort of doesn’t matter because autocomplete won’t let you do the wrong thing.


Yeah first being a property is weird to me too.

Although I do believe Swift has "virtual properties" that are really a method under the hood.
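(In Swift's terms these are "computed properties"; a minimal sketch with a hypothetical type:)

    struct Temperature {
        var celsius: Double                  // stored property
        var fahrenheit: Double {             // computed property: code runs on every access
            return celsius * 9 / 5 + 32
        }
    }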


The reverse seems weird to me. Things like the first element, or the count of an array seem conceptually more like properties than functions: they're just values derived from the data structure.

The fact that you need to use function call syntax for things like this in other languages seems like it is unnecessarily exposing implementation details to the API because of limitations of the language.


For me it's exactly the other way around.

Having "first" as a property feels like exposing implementation details to me. Sure, a array/vector is a contiguous slice of memory filled with items of a certain size. But the vector/array is is basically a pointer to the start of the memory + a size, so "first" is not a inherent part of the data structure, rather a computed/derived property.

When I see a property I think member of a struct, which is not the case here.

(ps: this is all really just hair splitting, it's a minor difference that you would get used to pretty quickly either way)


It's a reasonable point. But here `first` is actually declared in the `Collection` interface, which `Array` implements. There's no guarantee about what it's actually doing. (Although possibly it has a documented expectation to be O(1), can't remember at the moment.)


Collection's startIndex and subscripting is supposed to be O(1), so I'd assume that first is also supposed to be O(1) since it's trivial to implement it by composing the first two (in fact, this is how Swift implements it by default if you don't override it: https://github.com/apple/swift/blob/e08b2194487d883896a377a0...)
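Roughly, the default implementation just composes those two O(1) pieces (paraphrased, not the exact stdlib source, and named firstElement here to avoid shadowing the real property):

    extension Collection {
        var firstElement: Element? {
            let start = startIndex
            return start != endIndex ? self[start] : nil
        }
    }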


Or maybe they just opted for it for simplicity's sake? As said above, `first` is just a magic getter, which is a function underneath anyway.


But why should I as the user of an interface care whether something is implemented as a function or whether it's just a reference to a pure value?

For instance, let's imagine in some C++-like language I had two list implementations: one static and one dynamic:

    class StaticList {
        ...
        int count;  // set at initialization time
    }

    class DynamicList {
        ...
        int count() {
           int result;
           ... // compute the count dynamically
           return result;
        }
    }
Here `count` is conceptually the same between these two implementations, but if I want to get that value, I have to access it differently:

    int c1 = staticList.count;
    int c2 = dynamicList.count();
But that difference is essentially an implementation detail which means nothing to the client. The Swift way just lets me express my interface however I decide is most fitting.


There is a useful distinction between "properties" and "functions" that appears in some languages: properties exist in memory and so can be addressed. This is particularly important for "systems"/low-level programming, where addresses are often manipulated directly and need to be stable/controlled.

In languages like C, C++ and Rust, properties are things that are definitely in memory, and functions/methods are things that may not be. If you have an interface like `count` that may be dynamic, it should be uniformly expressed as a method/function (that may have a trivial "return count;" implementation).

On the other hand, Swift has to put a non-trivial amount of infrastructure into making sure all the things with property syntax can behave as if they are backed by memory, so that an address/pointer can be generated for them (in the general case, a temporary pointer that becomes invalid quickly).
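For example (a sketch with a hypothetical computed property; the generated pointer is only valid for the duration of the call):

    final class Counter {
        private var storage = 0
        var value: Int {                 // computed property, no dedicated storage of its own
            get { return storage }
            set { storage = newValue }
        }
    }

    func bump(_ p: UnsafeMutablePointer<Int>) { p.pointee += 1 }

    let c = Counter()
    bump(&c.value)   // Swift materializes a temporary, passes its address, then writes back via the setter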


The reason that performance-oriented languages like C++ and Rust make a distinction there is to make it easier for someone reading the code to understand the performance implications of a piece of code. Accessing a field involves jumping to a statically-known offset, which is a minor and predictable cost, whereas calling a function could have any cost imaginable. It's reasonable for languages that prioritize ergonomics over extreme performance to make the decision to paper over that distinction.


My ideal simple language is Java 1.4 without checked exceptions.


Then you can use Go instead.


Go does not have exceptions.


Sure it does, a primitive version via panic and recover.


That's a poor substitute for exceptions IMO. Also, almost all libraries, including the standard library, use another approach via returned error objects, so if one wanted to use Go with panics as error handling, they would need to rewrite or wrap at least the standard library. I would stick with Java at this point. Its library is terrible, but usable in the end.


Certainly not one with optionals. I still don’t see the need for it. I can check for nullability myself when it’s required.

Generics are indeed useful, but such a can of worms that they're not worth it IMO.


Optionals are like having a strong type system to me: they used to feel superfluous and unnecessary, but once I got used to it, I’m happy to have my compiler ensure that aspect of my code is correct rather than finding out I made a mistake from intermittent runtime errors.


Yeah. I don’t like strongly typed languages.


Most languages have optionals. Swift is one of the few that has guaranteed non-optionals. Java and C(++) have null pointers but no way to syntactically declare something is non null and have the compiler confirm such
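In Swift terms:

    var name: String = "Ada"        // non-optional: the compiler guarantees it can never be nil
    var nickname: String? = nil     // optional: nil is possible and must be handled before use

    if let nick = nickname {
        print("Hi, \(nick)")
    } else {
        print("Hi, \(name)")
    }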


That works only until what you can "understand" and "want" changes.

I really like to learn everything about every tech that I use, bordering on obsession, and after a few solid times that I hit roadblocks or had to fight with the tool because it is too "simple", I started to grok the difference of "simple" and "easy".


> the difference of "simple" and "easy"

Don't know if you were already referring to Rich Hickey's talk on this, but if you weren't, it might appeal to you. Simple Made Easy: https://www.infoq.com/presentations/Simple-Made-Easy

"Okay, the other critical thing about simple, as we've just described it, right, is if something is interleaved or not, that's sort of an objective thing. You can probably go and look and see. I don't see any connections. I don't see anywhere where this twist was something else, so simple is actually an objective notion. That's also very important in deciding the difference between simple and easy."


I don't get the motivation behind this. I assume Swift has subtype polymorphism, so why not do this:

  func allEncompassingShape() -> Shape {
    return SpecialGiantShape()
  }
This function returns some type that may be a subtype of Shape.


Because that does not work with generic protocols. You can't have a function return a sequence:

    func make_numbers() -> Sequence<Int> {
     // Cannot specialize non-generic type 'Sequence'
    }
    func make_numbers() -> Sequence {
     // Protocol 'Sequence' can only be used as a generic constraint because it has Self or associated type requirements
    }
Whereas with this change you probably could at some point do something like this:

    func make_numbers() -> some Sequence<.Element == Int>


What about

  func make_numbers() -> Sequence<Any>
This should work as long as Sequence is covariant on Element. You could return any possible Sequence here, like Sequence<Int> or Sequence<Double>. The user of the function wouldn't know.

You could also do something like

  func make_numbers() -> Sequence<Comparable&PartialOrder>


What about being able to make a CountableClosedRange from that sequence? That can’t readily be done with an Any type without some type checking. Specifying an Int element type should automatically make it easier for you.


I think you're confusing Swift's protocols with something akin to Java's interfaces. I don't really know Swift, but it looks a lot like Rust, so I'm going to assume that they work the same.

In Java, the following function definition compiles just fine:

  IShape getShape() {
      return new Rectangle(20, 40);
  }
And, assuming that `IShape` has a `draw()` method, you can write:

  IShape shape = getShape();
  shape.draw();
It works because there is dynamic dispatch occurring at runtime: the JVM will look for the implementation of `draw()` in the `Rectangle` class (not sure of the exact mechanism, but that's the idea), and call it. To find it, the value (here, shape) must hold a reference either to its class, so the JVM can go and look for the implementation, or to a table of all methods it implements (I think it's the former). So the compiled code doesn't know the layout of the concrete type used; it just adds instructions to go look for the implementation at runtime. But that's fine, because any Object in Java is in fact like that: a fat pointer, containing a pointer to the data, and a pointer to the implementations.

It won't work in Swift because, as far as I know, there is no dynamic dispatch, at least by default. When you write the following:

  func render<T: Shape>(_ shape: T, at point: Point) { … }
The compiler will know what the concrete type of `T` is, and so will use its implementation of the `draw()` method. It won't be found at runtime; it is known at compile time. What this means is that a protocol is not a type. This is the important part. An interface in Java is a type, because it doesn't dictate how the method is called; it allows the concrete type to live under the hood, with the right method called at runtime. A class in Swift is a type because you know how to call it, how to access it. But a protocol is just a set of constraints on a type, not a type by itself.

That's why you need the `some` in return position. Well, you don't really need the syntax, but it helps in understanding the difference with the "same" Java code. The `some` keyword says that the function will return some type that implements the `Shape` protocol. It will in fact return the concrete type, but this is not included in the type signature, so it can change, it can hide implementation details, without making a breaking change in the API. It also means that the following won't compile (I'm not sure here, but it works that way in Rust):

  func union(_ leftShape: some Shape, _ rightShape: some Shape) -> some Shape {
      if isEmptyShape(leftShape) {
          return rightShape
      } else if isEmptyShape(rightShape) {
          return leftShape
      }
      return Union(leftShape, rightShape) 
  }
Because here, not all code paths return the same concrete type. If you want different concrete types, you'll need the other keyword, `any`. `any` is in fact a lot like interfaces in Java, because it uses dynamic dispatch under the hood (if it works like it does in Rust). The compiler will know how to turn the concrete type into the dynamically dispatched one.


Generic functions in Swift are compiled separately from their callers. Protocol requirements called on values of generic parameter type are dynamically dispatched via a witness table passed in under the hood for each generic requirement.
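i.e. something like this (hypothetical protocol) compiles to a single body, with the `area()` call going through the witness table passed in for `T`, unless the optimizer specializes it:

    protocol Shape {
        func area() -> Double
    }

    func totalArea<T: Shape>(_ shapes: [T]) -> Double {
        // In the unspecialized body, `area()` is looked up in T's witness table at runtime.
        return shapes.reduce(0) { $0 + $1.area() }
    }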


I haven't followed C# for a long time, but I remember the type system being both obvious and powerful. Could anyone here explain what Swift is trying to accomplish here that just copying the C# way wouldn't achieve?


I see two use cases. Swift is a precompiled language, unlike C# or Java, so a call to a method through a protocol (an interface) cannot be inlined. Let's suppose you have an interface IList and an IEnumerator:

  interface IList<T> {
    IEnumerable<T> iterator();
  }
it means that even though you may know the exact implementation of the IList, you have no idea of the exact return type of the method iterator() at compile time, because it's hidden behind an interface.

Enter the existential types,

  interface IList<T> {
    type IEnumerator<T> Iterator;  // this is a type declaration

    Iterator iterator();
  }
now if you have an implementation of IList, you will have

  class FunnyList<T>: IList<T> {
    type FunnyListEnumerator<T> Iterator;

    Iterator iterator();
  }
so at compile time, when you use a FunnyList, the compiler knows the exact type of the method iterator().
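In Swift itself this first pattern is spelled with an associated type; a rough sketch (simplified, hypothetical names):

    protocol MySequence {
        associatedtype Iterator: IteratorProtocol
        func makeIterator() -> Iterator
    }

    struct CountUp: MySequence {
        let limit: Int
        // The concrete Iterator type is visible to the compiler here, so calls can be devirtualized.
        func makeIterator() -> CountingIterator { return CountingIterator(limit: limit) }
    }

    struct CountingIterator: IteratorProtocol {
        let limit: Int
        var current = 0
        mutating func next() -> Int? {
            guard current < limit else { return nil }
            defer { current += 1 }
            return current
        }
    }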

The other usual use case, which is a restricted version, is being able to specify the variance at the use site, like the wildcards in Java (List<? extends Foo>), which are a way to specify that you don't want the method to be parameterized but you want the type of a parameter to be covariant:

  void foo<T:Foo>(List<T> list)  // the method is parameterized

  void foo(List<some Foo> list)  // the type is parameterized
C# doesn't have variance at the use site, only at the definition site with 'in' and 'out'. Kotlin has both; Scala and C++ express it in type-name form.


C# and Java also have AOT compilers available.

In fact that is their execution model in UWP, Unity (consoles, iOS), Xamarin (iOS), Android, and some embedded deployments.


Is this describing something similar to how Haskell's return types work?


How so? I know Haskell has pretty extensive type inference, but I’m not sure exactly how it relates here.


Swift is so awesome! If it was simpler to start a non-UI project and compile times were shorter, I would definitely have come on board.


What makes it awesome? Reading the spec, nothing sticks out. I often wonder if it's just ObjC survivor bias but if there is anything that makes it actually special I'd love to know.


> I often wonder if it's just ObjC survivor bias but if there is anything that makes it actually special I'd love to know.

It's definitely not just this. Swift is very similar in design to Rust, and has a lot of the same goodies like an ML-inspired type system, but without the complications imposed by the borrow checker / lack of GC.


Light syntax, algebraic datatypes, type parameters, very solid enumerations, the whole ObjC library ecosystem, semi-decent (compared to Haskell/OCaml et al.) pattern matching, immutable and mutable data structures (Array vs ArraySlice for example).
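For instance, the enums-plus-pattern-matching bit (a tiny sketch):

    enum Fetch<T> {
        case success(T)
        case failure(String)
    }

    func describe(_ result: Fetch<Int>) -> String {
        switch result {
        case .success(let value) where value > 0:
            return "got \(value)"
        case .success:
            return "got a non-positive value"
        case .failure(let message):
            return "failed: \(message)"
        }
    }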


Don't forget the whole C library ecosystem, which IMO is much more useful.


This can be somewhat annoying to work with in Swift, though it’s gotten better recently.


It's not perfect, but it's loads better than using something like JNI.


JNI was made that way on purpose, because Sun wanted to discourage native code, as Mark Reinhold has already mentioned a few times at Java conferences.

Panama will thankfully fix that.

https://jdk.java.net/panama/

Just don't expect it to ever be supported on Android.

But they surely will,

https://en.wikipedia.org/wiki/List_of_Java_virtual_machines#...


Yeah, I can see it being a whole lot worse ;)


It has (IMO) pretty and consistent syntax for many programming paradigms. Most of the language decisions seem like they were given significant thought. Swift is a great language to read and write because it strives to be pleasant for both cases.


It's a decent language, if nothing particularly special. It's like a slightly fresher Java or C#. Survivor bias and standard-issue Apple fanboyism give it some extra shine.


> If it was simpler to start a non-UI project

You can do it in one command:

    $ swift package init --type executable


And then to make that project openable in Xcode:

    $ swift package generate-xcodeproj



