One thing I don't see in this list (though maybe I'm missing it) is the ability to expose a Swift function as a function compatible with the C ABI. Something like extern "C" in C++ or no_mangle in Rust.
You can pass @convention(c) blocks around as C function pointers, but you can't currently call directly into a Swift library from C code. That makes it difficult to do things like expose platform-specific functionality to a shared C library using Swift.
Right now you have to implement the C function inside an Objective-C file, and then have your Objective-C code call your Swift code, or you have to change all your interfaces to allow injection of function pointers for all external dependencies.
It's the kind of thing that makes staying with Objective-C attractive for that type of stuff, but Swift in general has been a solid improvement, and I'd like to not have to give that up.
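To be clear, the direction that does work today looks roughly like this; a minimal sketch, assuming a hypothetical imported C function named register_callback (the real declaration would come from a bridging header):

typealias CIntCallback = @convention(c) (Int32) -> Void

// @convention(c) closures can't capture context, so this body only
// uses its parameter.
let callback: CIntCallback = { value in
    print("got \(value) from C")
}

// register_callback(callback)  // hypothetical C entry point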
This seems to be in the works: an attribute, @_cdecl, has been merged into master [0] and appears to compile in the current 3.0 beta [1], although I haven't tested it.
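A minimal sketch of what it might look like, assuming @_cdecl keeps its current underscored, officially unsupported form; the function and symbol names here are made up:

// Swift side: export the function under a fixed, unmangled C symbol name.
@_cdecl("swift_add")
public func swiftAdd(_ a: Int32, _ b: Int32) -> Int32 {
    return a + b
}

// The C side could then declare and call it directly, no Objective-C shim:
//   int32_t swift_add(int32_t a, int32_t b);
//   int32_t sum = swift_add(2, 3);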
You're welcome. I have a custom mouse cursor a11y app that has to use some nasty old C callback API, which is exactly the PITA you describe, so I've been keeping an eye out.
Yes, I was hoping that 3.0 would allow C++ and Swift to call each other. Have spent a few thousand hours writing a reactive ontology in C++ that currently calls to and from ObjC.
I was asking about C++ support in the WWDC 2015 labs, and the answer was using an ObjC wrapper workaround. As a result, I put off a major port for "next year".
" ... However, C++ itself is a very complex language, and providing good interoperability with C++ is a significant undertaking that is out of scope for Swift 3.0. "
Rust did too. There's probably a pie chart somewhere comparing the time lost to having to type out "foo += 1" against the hours lost to people screwing up operator precedence for foo++ or ++foo. (EDIT: as munificent points out, this is actually a combination of operator precedence and sequence points.)
The pie chart is probably severely lopsided.
EDIT: Since the main problem is the precedence of the post-increment/decrement operator, I would have preferred to see them get rid of that one and keep ++foo, but overall it's not much of a loss.
At least they didn't get rid of the ternary operator like Go did...
It's not about operator precedence, it's about sequence points. Assignment, including the one buried inside ++, is a side-effecting operation. An expression form that has side effects can lead to very confusing code like:
foo(a++, a++);
What values get passed to foo()? In some languages, it's undefined (!). In others, there is a well-defined answer, but even there the answer may not be intuitive. Removing assignment expression forms avoids this. Note that even "=" is a statement in Go.
It does sacrifice a little expressiveness. But in languages that have dedicated syntax like ranges for iterating over sequential numbers, the ++ and -- operators end up pretty rarely used so it's no big loss.
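As a sketch of what replaces the usual counting loops in Swift 3:

// Ranges and stride(from:through:by:) cover most ++/-- loop counters.
for i in 0..<5 {
    print(i)                              // 0, 1, 2, 3, 4
}

for i in stride(from: 10, through: 2, by: -2) {
    print(i)                              // 10, 8, 6, 4, 2
}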
> It does sacrifice a little expressiveness. But in languages that have dedicated syntax like ranges for iterating over sequential numbers, the ++ and -- operators end up pretty rarely used so it's no big loss.
This is one of the big points too, that often gets overlooked. The removal of ++ fits nicely with the removal of C-style for loops.
In my opinion the example given is awful code. Yes, we should protect developers from themselves, but good taste in code is important, and easily incrementing a counter should still be a thing. Just stop the bad cases rather than the well-understood and expected usage.
Nitpicking: assignment is an expression that returns () in Rust, meaning `foo(a = 1, a)` is valid if foo has a signature like `fn foo(x: (), y: i32)`. However, arguments are (mostly) defined to be evaluated from left to right.
If anything it would be time lost to accidentally typing ++x instead of x++. I always found the precedence to be pretty intuitive in C (unary ops generally bind tightest). The only time you'd have to Google is if you were doing something weird like `-(x++)`.
I'm also willing to accept that I'm old, stubborn, and unhappy about changing my ways and that this is the future... :)
See, I always thought most people who wrote foo++ actually meant ++foo, and it typically only came up when operator precedence made the two not equivalent, causing a subtle bug.
I can't even think of a time where I really needed a use-then-increment operator where I wouldn't have just used the value, then incremented it on the next line anyway (maybe even with a post-increment out of habit).
int arr[5];
int i = 0;
// postfix: arr[i] is written first, then i is advanced
while (shouldIKeepGoing() && i < 5) arr[i++] = getAThing();
If you use ++i there you'll skip 0 and overflow the array. This is what the postfix form is for, if you value terseness.
Edit: To clarify, I'm not necessarily saying you don't know that's what it's for, but this is a pattern I see pretty frequently in the wild so I'm just throwing it out there with an explanation of why people find value in it.
Interesting. I generally choose x++ for the majority of array-striding code I write, usually as a shortcut to avoid having separate increment statements scattered around. I've found that you can structure your code either way, and postfix seems easier for me to reason about.
I use ++x very infrequently, and mainly in hand-written text parsers.
I generally assume that ++x is idiomatic in C++, and x++ in C, because in C++ you may be calling against some non-primitive type, where the update-and-return-the-non-updated-value semantics of the postfix version are not optimisable away.
You should be hiding naked mutation/accumulation like that in some abstraction. ++ is a smell that alerts you to the fact that you should probably be thinking in a different way.
That implementation is wrong in multiple ways. First of all, it should return the pre-increment value. Second, it doesn't actually increment your variable (you need to use an `inout` parameter for that).
Perhaps this demonstrates perfectly why it's being removed!
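For reference, a minimal sketch of a Swift 3 version that fixes both problems (an inout parameter so the variable actually changes, and the old value returned to keep postfix semantics):

postfix operator ++

@discardableResult
postfix func ++ (value: inout Int) -> Int {
    let before = value   // postfix semantics: hand back the old value
    value += 1           // actually mutate the caller's variable
    return before
}

var i = 0
let a = i++              // a == 0, i == 1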
Curious; have they made any statements on their versioning? I've seen mature projects years in development that are still 0 for major and in the teens for the minor version. Despite breaking changes Swift is now a 3.0 project?
I believe that they are using Semantic Versioning 2.0.0[1] as discussed in the Swift Package Manager Project's Website[2] under "Importing Dependencies":
> Given a version number MAJOR.MINOR.PATCH, increment the:
> MAJOR version when you make incompatible API changes,
> MINOR version when you add functionality in a backwards-compatible manner, and
> PATCH version when you make backwards-compatible bug fixes.
> Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
I've come to the conclusion that this is because Apple can't release anything that might not look "ready" on paper, since they built their image on attention to detail and polished products. In other words, a Swift 0.x would not have seemed polished/professional enough for Apple to release into the wild, even if it had all the present functionality.
Still no way to export structs or enum ADTs to Objective-C code, though. Really makes it hard to write idiomatic Swift APIs that can be called from Objective-C.
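A minimal sketch of where that bites (type names are made up for illustration): structs and enums with associated values can't be marked @objc, so Objective-C callers simply never see them.

import Foundation

// Invisible to Objective-C: value types and enums with associated values.
struct Point {
    var x: Double
    var y: Double
}

enum FetchResult {
    case success(Point)
    case failure(String)
}

// What does cross the bridge: Int-backed enums and NSObject subclasses.
@objc enum Direction: Int {
    case north, south, east, west
}

class Fetcher: NSObject {
    // Compiles, but is silently not exposed to Objective-C because
    // FetchResult has no Objective-C representation.
    func fetch() -> FetchResult {
        return .failure("not implemented")
    }
}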