Apple’s Use of Swift in iOS 13 (timac.org)
174 points by Timac on Sept 26, 2019 | 154 comments



On paper, Swift is my perfect language. It's like a slightly higher-level Rust.

For a while I was enamored of Haskell and its famed "if it compiles, it works" saying, but I think I've realized that the features that contribute to that are non-nullability and algebraic data types. Turns out I don't particularly like the extreme functional nature or (especially) the lazy aspect of it.

Rust is almost there, but then I'm dealing with lifetimes and working a little lower than I usually need. But I found I _love_ its exhaustiveness checking for enum variants.

The ML family is pretty much in my sweet spot. For some reason it doesn't really get the love it seems to deserve, and last I heard OCaml had an issue with being single-threaded.

So that leaves Swift, with its non-nullability, algebraic data types, and exhaustive enum pattern matching. The only downside is its focus on Macs! I'm holding my breath, waiting for it to run just as smoothly on linux, and developing some developer cultural cachet and libraries for servers. I hope it does! I'm glad to see in this article that it's being used more and more at Apple.

And with SwiftUI, which looks pretty promising, I have to imagine it will only continue to grow in popularity.


Swift feels like a Frankenstein language though, there are so many keywords and new ones get added in every update. This article here [1] stack-ranked a few languages by number of keywords, and Swift is #4 on the list. And in Swift 5.1 they added some more, like property wrappers and the `some` keyword. It gets hard to keep track of them all and the special behavior each one implies.

On the other hand, you can kind of make your own language within Swift because most of the keywords aren't required but enable additive features. For example, some features help you write your code functionally and others support a declarative style. It reminds me of what's going on with Babel and JavaScript, where Babel plugins are like language extensions you can mix and match to create your own language.

If you think of metaprogramming as a way to build your own language within a language, lisps and Ruby were kind of like pioneers of a build-your-own-language trend. I could see that as an interesting direction, I wonder if people have tried it before.

[1] https://medium.com/the-traveled-ios-developers-guide/swift-k...


The thing that makes this work in Swift is the concept of Progressive Disclosure. One of the main tenets of the language is that it should be possible to be productive with a small, high-level subset of the language, and then you can gradually start to utilize the more powerful and obscure parts of the language as you find needs for them.

> you can kind of make your own language within Swift

This is one of my favorite aspects of working with Swift. Because it's a relatively un-opinionated language with a diverse toolkit, and because of the powerful type/protocol system, it can really feel like you can carve out a DSL for each specific use-case.
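
For instance, here's a minimal sketch of the kind of mini-DSL I mean, built from nothing but plain structs, variadic parameters, and free functions (the Node/tag names are made up for illustration):

    struct Node {
        let tag: String
        let children: [Node]

        // Render the tree to an HTML-ish string.
        func render() -> String {
            return "<\(tag)>" + children.map { $0.render() }.joined() + "</\(tag)>"
        }
    }

    func tag(_ name: String, _ children: Node...) -> Node {
        return Node(tag: name, children: children)
    }

    // Reads almost like markup, but it's plain Swift:
    let page = tag("html", tag("body", tag("p")))
    print(page.render()) // <html><body><p></p></body></html>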


> The thing that makes this work in Swift is the concept of Progressive Disclosure. One of the main tenets of the language is that it should be possible to be productive with a small, high-level subset of the language

"Progressive Disclosure" is an interesting idea for PL research, but IME Swift doesn't do a very good job at it. There's a million cases I've seen where people trying to do simple things run into a pandora's box of complexity (e.g., the infamous "Protocol 'P' can only be used as a generic constraint because it has Self or associated type requirements"). There isn't really a "small subset" I've found that's usable for anything but the absolute simplest programs. Even the fundamental types like Int and String have a lot of not-entirely-hidden complexity in Swift.

> Because it's a relatively un-opinionated language

It seems awfully opinionated to me. There is a clear preferred way to write almost anything. Ask online about how to do things differently, and you'll be told to do it in a more "swifty" way.

> because of the powerful type/protocol system, it can really feel like you can carve out a DSL for each specific use-case

I'll have to see if the new features in the latest versions of Swift (used for SwiftUI) enable this more readily, but as of 4.2, anyway, writing DSLs in Swift is moderately limited and painful.


> Swift feels like a Frankenstein language though, there are so many keywords

That's a big reason why I don't like it. I also can't stand func, let, and var. Because func and var are totally useless, and let is the wrong word (should be const at least).

There are also all the question marks and other confusing syntax.


let is the right word given the historical background of functional programming languages.


Nit, only mentioning it because I'm super curious about that list: searching for "4" and "rank" in the article doesn't show any results relevant to the ranking.


ah sorry I pasted the wrong link, here it is: https://medium.com/@richardeng/how-to-measure-programming-la...


More accurately, it's tied for third place with C++.


That doesn’t mention PL/I, so it’s the competition for amateurs. https://www.cs.vu.nl/grammarware/browsable/os-pli-v2r3/:

    Summary
    Number of keywords: 310

PL/I made life ‘easier’ by not making keywords reserved, allowing you to use variable names that match the strings used to identify keywords, so you didn’t have to learn them all.


They’re certainly wrong about Scheme. They used MIT Scheme to count keywords, when they should’ve used R5RS Scheme or newer. R5RS has 20 keywords, plus 4 if you add support for macros. With define-syntax you can define many special forms such as let, let*, etc. in a smaller core language, so maybe the actual number is much smaller!


They don't say what version of Swift they're counting, but I count 96 keywords on the documentation page they link to, which would move it above C++ into 3rd place.


Oh wow, Smalltalk has only 6 keywords. That's really... small :).


> Rust is almost there, but then I'm dealing with lifetimes and working a little lower than I usually need. But I found I _love_ its exhaustiveness checking for enum variants.

Out of curiosity, when's the last time you worked with Rust? A lot of lifetimes can now be elided, especially since the 2018 edition. That's not to say that you don't still run into them, but in middle-of-the-road cases it's much less frequent that you run into them unexpectedly.


Having written a lottt of both Rust and Swift there is something about Swift that makes it "flow" far easier than Rust.

I haven't been able to narrow down exactly what it is, but with Swift I can just put pencil to paper. With Rust I feel like I need to plan more, I dunno.

I do think my experience is colored by tooling, though


I briefly built a Swift app. I hated Xcode, but Swift was by far my favorite language I have ever worked with.

I'd love to see a Rails like framework come to Swift. I don't think we'll ever see that.


If you like the Swift language, you would also like Scala, and Scala has Play, which is Rails like. Also, in anticipation of that question, yes, Scala can be AOT compiled using Graal.

https://leverich.github.io/swiftislikescala/


There is one at least! www.vapor.codes, it’s really great.


And, unlike Rust, Swift is often much slower if you aren't extremely careful about how you use it.


Swift has a different niche than Rust. They both offer safety, but Rust prioritizes performance, where Swift prioritizes ergonomics.

For me personally, Swift is often fast enough for my needs as a compiled language, but it's so much easier to be productive in since it obviates many of the more tedious concerns of Rust.

Both languages play nicely with the C FFI so it's also possible to write high-level code in Swift and drop down into C or Rust for the performance sensitive parts.
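
A rough sketch of how little ceremony the C side needs from Swift (assuming libc; on Linux you import Glibc instead of Darwin):

    #if canImport(Darwin)
    import Darwin
    #else
    import Glibc
    #endif

    // Calling C's qsort directly on a Swift array's storage.
    var values: [Int32] = [3, 1, 2]
    values.withUnsafeMutableBufferPointer { buf in
        qsort(buf.baseAddress, buf.count, MemoryLayout<Int32>.stride) { a, b in
            let x = a!.load(as: Int32.self)
            let y = b!.load(as: Int32.self)
            return x < y ? -1 : (x == y ? 0 : 1)
        }
    }
    print(values) // [1, 2, 3]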


There is also F#, open source and available in .NET Core on multiple platforms.


I love F#, and the whole .NET environment. It's very performant (and C# isn't a bad language either, for you OO fans). GC is OK by me.


Funny, the reason why I ended up not exploring Swift further, and instead trying out F#, was due to Swift's lack of tracing GC.


> I'm holding my breath, waiting for it to run just as smoothly on linux, and developing some developer cultural cachet and libraries for servers.

Apple Swift developers are actively working on Linux support, but I suppose it'll forever remain a second-tier platform to iOS & macOS. Kind of like C# to Windows, but Swift doesn't have its Miguel de Icaza to create its Mono on Linux.

I'm sure the Swift team is well aware of this cultural bottleneck, hence the current efforts to support Linux, but it remains a cultural chicken-and-egg problem.


It's not just Apple working on Swift on Linux. IBM for example has done a ton of work. A Facebook engineer has contributed a HUGE amount of work to bring it up to speed on Windows (without WSL or Cygwin). And Google will end up contributing on all fronts if they continue to push it in the ML space.


> last I heard OCaml had an issue with being single-threaded

OCaml is single-threaded, yes, but in practice that's not a big deal. There's a great concurrency library (Lwt) and the recently-added monadic (and applicative) bind syntax makes it look almost like sequential code. And if you really need a separate OS thread, there are ways to do that :-)

Last I checked, Swift still does concurrency mostly with Node-style callbacks.
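
For instance, the prevailing style looks roughly like this (URLSession's completion-handler API; on Linux this also needs FoundationNetworking):

    import Foundation

    let url = URL(string: "https://example.com/data.json")!
    let task = URLSession.shared.dataTask(with: url) { data, response, error in
        if let error = error {
            print("request failed: \(error)")
            return
        }
        guard let data = data else { return }
        print("got \(data.count) bytes") // chaining work means nesting more callbacks here
    }
    task.resume()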


Swift is my dream programming language as well, but when I tried (2 years ago) reimplementing some Python scripts in it, I had a hard time with things as simple as printing to the terminal (which involved distinct libraries and methods for macOS and Linux), and other things I don't remember very well at the moment.

But I do remember having to implement protocols for standard library types myself, most of which obviously belonged in the standard library; Apple even used these protocols as examples in their docs.

The language is truly great, but it was not ready for primetime outside of Cocoa apps.

Has anything changed?


Swift does run on Linux and Windows. Getting to usable is a function of community; waiting for one means you won't be a part of helping build it.



Did you try Scala? A fairly mainstream functional programming language inspired by the ML family. I personally love it.


Scala is too big and complex. I tried to learn it, but ended up using it as a better Java with some functional features like pattern matching. I felt like it wasn't worth continuing to master Scala past a certain point.


The funny thing is, those are not expensive or difficult things to implement in a compiler.


How about Scala?


It’s a really nice language. I like it a lot. But it has a dependency on a pretty hefty runtime. I understand that packaging nowadays is not really a problem, but still.


You should check out GraalVM. Easy compilation to native code. A total game changer for the JVM, imo.


JetBrains had a recent survey that showed half of all iOS developers were using Swift only.

https://www.jetbrains.com/lp/devecosystem-2019/swift-objc/

How about HN devs?


I do some contract work for a company I sold an app to a while ago, and the codebase is mixed Objective-C/Swift. Anything new I do in Swift, but there are a few rare new view controllers and the like where Objective-C is used because it's just easier/faster to copy/paste infrastructure.

Some crypto code is also still Objective-C due to easier linking, but that'll probably change soon enough.

I don't consider ObjC to be a bad language in the slightest; in fact, I'll offer the (probably contrarian) opinion that it's one of the greatest languages of the past few decades.


Can you explain why you like Objective-C?

I've never used it, but have seen many code snippets in API documentations, and it just looks so...unnecessarily obfuscated and verbose, and with a syntax so wholly unique to Obj-C that my brain can't make sense of it like I would looking at, say, Java code from a C++ background.


All of the following:

- As malleable as JavaScript when you need it to be

- As typed as your favorite language when you need it to be

- Able to drop to much more arcane levels if you need to for certain performance-intensive places, without requiring some crazy setup (e.g., C/C++ are right there if you need them)

- ARC is IMO one of the best approaches to memory management out there

- The verbosity can be annoying at first, but it forces you to think hard about everything, and when I come back to Objective-C code years later, I've no issue remembering what the hell I was doing there

- People complain about brackets, but they just... don't matter. It's a syntax. You either deal with it or don't - you don't see me writing Lisp because I find the syntax annoying, which is fine.

- Message passing in ObjC is so optimized that it probably can't get much faster, and anyone acting like it's slow has a potentially skewed understanding of this

Swift is nicer for me in only two distinct ways:

- No more header files, because man was that annoying

- The stricter nil handling is overall better, if not a mental shift from some ObjC counterparts


> - Message passing in ObjC is so optimized that it probably can't get much faster, and anyone acting like it's slow has a potentially skewed understanding of this

The problem with ObjC message passing is not the absolute execution time involved in passing a message. The problem is that the dynamism involved completely prevents the compiler from doing any inlining. Inlining small functions is typically what enables the compiler to unlock further optimisations, from simple things such as merging duplicate loads from the same address to auto-vectorization.
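
You can see the same trade-off from inside Swift, for what it's worth (a hedged sketch): marking a member `@objc dynamic` routes calls through objc_msgSend and makes them opaque to the inliner, while `final` lets the optimizer devirtualize and inline:

    import Foundation

    class Counter: NSObject {
        var value = 0

        // Dispatched via objc_msgSend: the compiler can never inline this,
        // so no cross-call optimisation happens.
        @objc dynamic func bumpDynamic() { value += 1 }

        // Statically dispatched: trivially inlined at -O, after which the
        // optimizer can work across the call boundary.
        final func bumpFinal() { value += 1 }
    }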


Consider the following: the smoothest UI/UX platform of the past decade has been iOS, which was pretty much built on ObjC.

These types of speedups just aren't necessary in that world. You'd drop to C or C++ if you needed it, which is trading convenience for complexity.

You could want it to be faster, but it's not going to materially show itself in common workloads.


If you need this level of optimization you can always use pure C functions, which is exactly why Objective-C was created as a C extension.


If you want inlining don't use message passing. Just define your function as if you were writing C. Simple. Use message passing when you need the dynamism.


> - The verbosity can be annoying at first, but it forces you to think hard about everything, and when I come back to Objective-C code years later, I've no issue remembering what the hell I was doing there

This this this. I love how it forces the verbosity. It makes it so easy to understand.

> - People complain about brackets, but they just... don't matter. It's a syntax. You either deal with it or don't - you don't see me writing Lisp because I find the syntax annoying, which is fine.

I really don't mind the brackets. If you structure your code right, it's not even that big of a deal.

> - Message passing in ObjC is so optimized that it probably can't get much faster, and anyone acting like it's slow has a potentially skewed understanding of this

Not just this, but I love how you can send messages to nil and it doesn't crash, or how you can add a method at runtime and it just works.


Reference counting is the slowest way of doing automatic memory management; the only thing going for it is that it's the easiest to implement.

And it shows: Swift gets slammed in the ixy paper by all the tracing-GC languages, spending a huge amount of time handling reference counts.

It makes sense for Swift though, because it is the easiest way to integrate with Objective-C, given the failure of Objective-C's garbage collection experiment, which had to deal with C semantics and non-GC-enabled frameworks and led to constant crashes.


Objective-C was designed by adding objects to C. So the base syntax is C. Then the idea was that the object-oriented bits were inside [], and the named arguments or message syntax came from Smalltalk and was designed to make the language easier to read. Named arguments are very verbose, but they make the code much more readable.

Then much later Apple added the . syntax, which while more familiar to a lot of modern developers kind of broke the cleanliness that the syntax used to have.


> in fact, I'll offer the (probably contrarian) opinion that it's one of the greatest languages of the past few decades.

I absolutely agree. It's such a beautiful, easy-to-read language. It's got its quirks for sure, but I absolutely love it.


Pure Swift now.

It wasn't viable as v1, but now it's solid & clean.

Seems like it was a good opportunity to take a well understood language (C/C++/Obj-C) and after 30 years rebuild it ground-up into what it should be. Some constructs & workarounds just weren't going away without a clean-slate industry-wide fully-compatible restart.

Top problem now is getting developers to not force-unwrap optionals unless they're able to prove it won't cause a crash.


Similarly in TypeScript, it seems like most of my code reviews consist of just telling people not to override null checking.

Adding a ! suppresses an error from the compiler; it doesn't fix the error in your code!


I occasionally use `!` as an escape hatch from the compiler's inability to reason about code correctness. Assert signatures in TS 3.7 will mitigate this issue significantly, but until then, I'm bangin'.


Yeah, there are still many flaws in TypeScript's null checking, even in quite pedestrian code. The ! operator isn't evil, far from it; it exists so that the compiler can be more aggressive at checking nulls, knowing that programmers can always override it if it makes a mistake. I would never rewrite my code to avoid ! per se; I write it to be most readable to humans, and if that doesn't please the compiler then so be it.


That's an interesting discussion, and one that will still keep happening.

At a given point, you can catch errors, but there's nothing meaningful you can do about them.

It's good when languages (and developers) realize there are errors you should handle but there are errors you "shouldn't" because there's nothing you can do but bail out.


In almost all cases, there's a more appropriate way to indicate your expectations than using force-unwrap.

Even in those cases where you can't recover from a failure of those expectations, your code will be 1000x more legible if you demonstrate that you understand that the error could exist and what it might represent about the system as a whole. If you want to explicitly fire a fatal error after that because there's no other form of recovery, so be it.

Trying to capture all that in a "!" saves you a few LOC (or worse: minutes of reasoning) now but suggests somebody else might be pulling their hair out in frustration six months later. Please don't do that to someone.


"Nothing to do but bail" is very rare, orders of magnitude below the frequency of misused force-unwrap.


I save force unwraps for things that will be caught at develop/testing time, like app images that are supposed to be in the binary or controls that are supposed to be wired up in Interface Builder. Otherwise, for me, using a force unwrap is a code smell.
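
e.g., something like this, where a nil can only mean the bundle itself is broken (asset name made up):

    import UIKit

    // If "NavBarLogo" is missing from the asset catalog, this crashes the
    // first time the screen loads during development, not at some
    // unpredictable point in production.
    let logo = UIImage(named: "NavBarLogo")!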


You fix that by banning force unwraps outside of something like unit tests with a linter. We do it at work and I can't remember it ever being an issue.


Great idea. Can you enforce it in PR checks?


Yes, at work we added SwiftLint as part of our build pipeline, and we have it reject any usage of force unwraps.
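
For anyone curious, a minimal .swiftlint.yml along those lines might look like this (force_unwrapping is an opt-in rule; the excluded path is just an example):

    opt_in_rules:
      - force_unwrapping   # not enabled by default

    force_unwrapping:
      severity: error      # fail the build rather than warn

    excluded:
      - Tests              # keep unit tests exempt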


Even when they can prove it, they shouldn't do it. It's not hard to conditionally unwrap optionals.


Force unwrapping optionals is quite useful: it's great syntactic sugar for what often just gets rewritten as guard/assertionFailure.


That's so rare in practice that you should just write it out fully. If it's happening frequently in your app, you should probably rethink your architecture. As time goes on, I've realized why people think asserts are code smells of their own.


Do you not use implicitly unwrapped optionals for IBOutlets?


If you can prove it won’t be nil, normal IBOutlet included, fine.


That's about the only place for them to be acceptable, tbh.


We don't use storyboards or nibs.


That’s fine IF they can prove it won’t crash or that a crash is appropriate (a la forced exit). Too many developers use it as a happy path shortcut, not considering the compiler is warning of possible serious problems.

Crashes & exits are not acceptable in production code, and I’ve had to fix too many of them.


I strongly disagree. I would rather an application crashed and provided me with an actionable crash report rather than it continuing on in some random state. (This isn't to say that you should ignore errors; it's just that if you don't think there should be an error at a certain location, I would really want to know about it if there is one.)


That's why force-unwrap is so bad: the compiler is telling you there's a risk, at a time you can do something sensible about it, and force-unwrap puts solving the problem off to the worst possible scenario. My production user base can't afford "actionable crash reporting" as a debugging tool vs compiler warnings; even 0.01% of users experiencing it would get very expensive.


If a thing can’t be null/nothing and there is nothing you can do it’s usually best to tear the world down. Otherwise in the best case you have a button that does nothing but in the worst case something bad happens. In the normal case the crash just moves. “Catching” programmer errors because a program that limps along looks better than one that crashes is a design decision I never understood.


If "a thing can't be null/nothing", then you've made an error in making it an optional.

If a thing might be null/nothing but can't be used when it is, then you should write code that indicates your understanding of that case. Catching an error doesn't imply that your program should proceed, it communicates that you the programmer understood your system and anticipated its failure states. Proceeding or aborting is a secondary decision.


> If "a thing can't be null/nothing", then you've made an error in making it an optional.

I don't disagree with this, but I can point out gobs of places in AppKit where Apple has arguably 'made an error' and I can't do anything about it. Or in third-party C libraries, which hold 99% of the functionality I ultimately need, and were not designed for having nice Swift programming interfaces.

I feel like everyone saying "never force unwrap!" must be writing pure-Swift standalone functions, with no dependencies, including the operating system. The API for writing items to the Mac clipboard still documents that it can throw exceptions, which are impossible to catch in Swift!


Exactly. And in many languages that is quite difficult to ensure at compile time. In Java, all object parameters are by definition optional. Same thing in C# (before C# 8 brought non-nullable types).

Obviously in F# or Haskell or C# 8 or Rust this isn't a problem. But what we are talking about is: how do you handle, at runtime, in a language such as C# 7 or Java, a null parameter being passed to a method that must return a value based on its parameter? It's a choice between very bad options, where the least bad is usually to throw an ArgumentNullException or similar. Because there is nothing better to do.


> If "a thing can't be null/nothing", then you've made an error in making it an optional.

There are cases where the programmer cannot express this to the compiler without a bottom type.


It's not just that it looks better. It can give the person using the software a chance to save their data.


That one should always do of course (but perhaps not to the default place because there is every chance the data is already corrupted)


Yes, and I'm usually happy the compiler is there to warn me. However, the compiler is not great at telling me why something might fail, which can make it difficult to provide decent error handling. And of course, there are certain places where the compiler just cannot know that a certain operation will not fail.


That's why you use it only when you can prove it won't fail. I'm ok with that, and said so at the start.


I inherited a Swift app once that was written by a dev for whom it was their first Swift project. IUOs everywhere. And they were the biggest contributor to the crash count. I spent six months cleaning it up; the crash rate went down dramatically. It is not wise to assume some value will always be there when you do not control its source (e.g. an API).


> I inherited a Swift app once that was written by a dev for whom it was their first Swift project. IUOs everywhere.

Right, hence why I'm suggesting that they can be useful if you don't put them everywhere and aren't using them as a band-aid to make your code compile :)


A guard statement requires you to actually handle the error, though. The exclamation mark will crash the app.


assertionFailure is not any more "handling the error" than forcefully unwrapping the optional.


Yeah, it's also something you shouldn't do.


What should you do when URL(string:) returns nil?


Obviously it depends heavily on what you're doing in your app and where that string is coming from.

Is it a hard coded string that will never change? Force unwrapping is probably appropriate.

Is it based on user input? Then you should probably tell the user the string they entered isn't a URL.
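
Side by side, that's something like this (a sketch, with print standing in for real UI and networking):

    import Foundation

    // Hard-coded constant: nil here is a programmer error, so crashing
    // immediately (and loudly) during development is defensible.
    let apiBase = URL(string: "https://api.example.com")!

    // User input: nil is an expected, recoverable case.
    func open(_ input: String) {
        guard let url = URL(string: input) else {
            print("'\(input)' is not a valid URL") // or surface it in the UI
            return
        }
        print("fetching \(url)") // hand off to real networking here
    }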


Right, and in that case it's pretty clear how errors can occur and what you should do to handle it.


That's contextual. Maybe some combination of:

- abort and unwind the current action

- retry something

- log something

- abort the app explicitly

- fallback to a different value

- comment why this is a known-possible failure state

- present a message to the user

...


The one time you should unwrap it is if it's a constant - that way the app will always immediately crash if it's nil. Pretty difficult to not run into a bug like that.

But if you're creating it dynamically you should be guarding and throwing an error.


Guard with assertionFailure and explanation for why it was used is much easier to notice and understand than !.


You know, I used to think that but I've come to realize that I often just ended up writing messages that were not useful. For example, here is some code that I wrote a while back:

  guard let data = notification.userInfo?["updatedItem"] as? Data else {
      assertionFailure("Could not retrieve updated item")
      return false
  }
Am I really being helpful here? If I see a crash on this line:

  let data = notification.userInfo?["updatedItem"] as! Data
I get essentially the same information, except it's done in a less verbose and easier-to-discover way. Whereas the first one is like adding this kind of useless comment:

  let x = 5 // Assign 5 to x


Two problems I see:

1. ! used here is hard to spot; it's just a few pixels of difference from ?.

2. assertionFailure is not the same as a force unwrap, because it crashes only in debug mode. Ideally you would log it with some analytics so developers know the problem occurred, but doing nothing is usually better than crashing the app.


Why not use `if let` instead? Way safer


The issue I'm talking about is what to do when you don't have a good idea of what to put in the "else" branch.


A useful error message?


This isn't always possible.


Why not? I always fold or map my option types. Sometimes I return None and caller handles it, sometimes I print a useful and informative error message. One should never ever directly unwrap an option value. In fact, some languages and libraries simply don’t allow it.


Objective C is good once its philosophy "clicks" with you, but any new development I would start in on Swift. Swift is a much nicer language, feels like compiled Python.

Same with Android, I would choose Kotlin before Java if it was my choice.

People still working with Objective C might be like I recently was: maintaining a sizable codebase where, unless Apple breaks something, porting to Swift is not justifiable to management.


You don't think the Swiftisms they added, like getting rid of conventional looping constructs, are odd?


There's nothing unconventional about for/in loops on Apple's platforms. Even in Objective-C, NSFastEnumeration has been the recommended default loop mechanism for a decade+ now.


Absolutely not. These are among the best features; in any language that has modern control flow, I rarely if ever use old-fashioned loops.


What are "old-fashioned loops"? Does that mean C-style?

As an also Lisp programmer, I find Swift's limited (and fixed!) set of control flow constructs downright ancient. They added for-each loops, but that's it. We had more "modern control flow" in the 1980's.


Diversity in control flow constructs isn't necessarily a good thing. In imperative languages you see goto being discouraged and removed. In functional languages you see things like call-with-current-continuation discouraged.


It is to me. The alternative is implementing them ad-hoc in every function. (Or perhaps waiting another few decades for Swift to add them to its compiler.) Having used many languages at different points on the power spectrum, I remain unconvinced that there's any advantage to omitting abstraction capabilities and forcing programmers to deal with it.

Besides, both of those examples are essentially non-local jumps, and I'm not sure I'd describe them as "modern".

In a sense, Swift already allows diversity in control flow constructs, via closures and the trailing closure syntax. It's just somewhat awkward, and not flexible enough to implement, say, most of Lisp's ITERATE library. That's packed full of exactly the kinds of control flow constructs that I have to write out by hand in Swift every day.


Do you mean the C-style for loops? They were removed in favour of iterator-style loops (for foo in bar { ... }). Or are there some others I haven't noticed?


It's actually more curious that they didn't have iterator style loops from the beginning. It's a pretty standard feature in newer languages and even plenty of older languages are adding support for it.


Swift had iterator-style loops from the beginning. It also initially supported C-style "for (i = 0; i < j; i++)" loops, but they were removed very early on (because in practice, they're rarely used for anything except iteration, and iterator-style loops are much more readable and less error-prone).
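
And for the rare cases where you genuinely wanted the C-style counter (non-unit steps, counting down), stride covers it:

    // for (i = 10; i > 0; i -= 2) { ... } becomes:
    for i in stride(from: 10, to: 0, by: -2) {
        print(i) // 10, 8, 6, 4, 2
    }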


I never looked at Swift, but for example how do you write "hello world" 10 times on the command line if you don't have for loops?

You could do it with a while loop (if it exists in Swift), but I'm at a loss as to why you would remove a for loop from a programming language.


Iterator-style loops are much clearer and more concise in almost all cases. In this case you can iterate over a range:

    for _ in 0..<10 {
        print("hello world")
    }


I don't. I much prefer a chain-able functional approach to multiple mutating loops.


You can always get the index alongside each value via the following, much cleaner:

  for (index, value) in arr.enumerated() { /* loop body */ }


To be fair, no language is going to be perfect. I don't mind them, but I remember how odd enumerations and mappings felt at first in Python. IMO, overall, the pros of Swift vastly outnumber the cons, when the alternative is Objective C.


That's kind of the thing to do in C# as well, and with Java Streams; and in Rust the fastest way to do loops is not to write them but to use iterators.


As a language itself, I love Swift. Optionals, the guard statement, and protocols with protocol extensions stand out as awesome productivity features. However (at least as of 1 year ago; I'm on React Native now), the tooling around Swift was still crappy. Expect longer compile times, occasional hair-tearing issues, broken autocompletion, and flickering syntax highlighting. Overall though, these issues aren't enough to make me want to go back to Objective C.
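
For anyone who hasn't tried those features, a quick sketch of guard plus a protocol extension (the types are made up):

    protocol Describable {
        var name: String { get }
    }

    // Protocol extension: every conformer gets this for free.
    extension Describable {
        var description: String { return "<\(name)>" }
    }

    struct User: Describable { let name: String }

    func greet(_ user: User?) {
        // guard: early exit keeps the happy path unindented.
        guard let user = user else {
            print("nobody to greet")
            return
        }
        print("hello, \(user.description)")
    }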


It's a testament to how crappy my code is, but I find myself having to give it type hints too often, and it'll give up on spitting out warnings if the file gets too large. There's a perverse thought. I can bypass warnings-as-errors by making my code bad enough ;)


This typically happens at the layers where interop isn't so good, like using really old Obj-C stuff. It's worth reporting this to the Swift project on GitHub.


I'm sure https://bugs.swift.org/ will appreciate it much more.


I started using Swift around 5 years ago, and I have barely written a line of Objective-C since. I like the philosophy of the language a lot, and I use it on other types of projects as well. I'm frustrated by the tooling and the cross-platform story. It's a great language to program in, but it can be painful to work with in terms of project fragility and dependency management.


Yeah, it looks like the Apple team has trouble keeping up with Linux distribution, and nobody seems to have picked up the slack.

By the way, what is painful in terms of project fragility?


I spend most of my time in Swift. All new features are developed in Swift. We've slowly been doing a conversion, since about Swift 3, and are about 95% of the way there.

We also have several legacy apps that are all ObjC, or 50% ObjC. There's not a ton of new feature development, but still updates.

If I had my druthers, I'd be Swift exclusive.


For personal projects, Objective C (Java for Android); I like them more. For commercial projects, I guess Swift/Kotlin is the way to go, as it's the new hotness.


Same here.


Pure Swift for the new apps. We freak out when we see @objc or NSObject usage in our codebase. On a long enough timescale, when Apple converts away from using Objective-C under the covers, our code will not need any modification.


> We freak out when we see @objc or NSObject usage in our codebase.

…how are you interacting with UIKit?!


You rarely have to add @objc annotations or use NSObject directly. You only need it for target/action, KVO, etc.

I rarely see that annotation or class name in my codebases too.


UIButton only? Maybe binding UITextField events?


Buttons, selector-based Notifications (though the block version is finally good, so we mostly just use that now), & gesture recognizers look like our biggest users of @objc. Basically all to support target/action.

A quick search through one of my apps shows that about 4% of functions are marked as @objc (and we're not using the old compatibility mode where more methods were implicitly @objc either).


@IBAction can replace @objc for all selectors. We made it a mandate in our code reviews. There is also NSObjectProtocol, which forces (in most cases) devs to subclass NSObject.


Been using pure swift for quite some time now. There used to be some libs for which I'd have to make a bridging header, but lately there's fewer of those.


We're using Swift 5 on our iOS and macOS apps.


The reason why Swift isn't being used more is that the Build and Integration team at Apple forbids the use of Swift for any project that has downstream dependencies. This is because Swift doesn't support shipping a stable interface for its symbols, meaning any Swift project has to be fully built and compiled before anything that depends on it can begin compiling. They're getting a little more lax about it now, but we've wanted to use Swift for years and have been totally unable to.


Do the new XCFramework/module stability improvements help here?


I wish they showed percentages, or the total number of binaries. It doesn't really tell me much that 141 binaries use Swift. I want to know what proportion of iOS uses Swift.


From a very rough check, there are on the order of a couple thousand binaries in iOS.


I've waded into Swift recently. It's a surprisingly easy language to pick up, with really nice modern language features. I really hope more platforms support it.


Anyone here using Swift on the server? Perhaps something in production even?


Starting about three years ago, I taught myself to build sites with Kitura, and got pretty far with it. As someone who has been paying the bills entirely with PHP since 2007, I loved the strong typing and great tooling, and I was hoping to get in on the ground floor and establish myself as an expert so I could get scooped up by an early adopter, or (gasp) maybe even IBM themselves. But those opportunities never materialized, and I got discouraged about a year ago after IBM made the baffling decision to release a new version of their database library, Kuery, that made it unusable in any non-asynchronous way (barring silly hacks), which doesn't really make sense for web development. Sure, you could still use other database libraries instead, but then you're out of the IBM ecosystem. It was a weird decision, and one has to wonder if IBM was eating their own dog food.

At any rate, I'd love to get back into it one of these days, but for now, PHP projects have paying clients, so that's still what I focus on. I would also like to see Swift get more "official" attention outside of Ubuntu; specifically I want it on the BSDs, though of course other Linux distros deserve it too. I'll gladly use Ubuntu if a client pays me to, though.


I do, without good reason. I never programmed for iOS. I learned it for fun and now it's my favorite language. I use IBM Kitura for server-side development because of its well-designed API.

The ecosystem is indeed the greatest downside. A lot of the existing libraries are also polluted with UIKit code making them unusable on Linux. Just recently wanted to use an existing API client of some social network, but couldn't, because it contained UIKit code. Forking it and fleshing out the parts needed feels kinda wrong.


I was thinking about it, however I've found that String performance is horrible (at least with Japanese strings) if you do anything like hasSuffix() or hasPrefix(). I think the problem is that String converts to Unicode code points as soon as it seems useful, instead of keeping everything in UTF-8 (or whatever, really) and doing a memcmp for hasSuffix() and hasPrefix() specifically, which would be much faster. Using NSString everywhere is faster, but then string concatenation involves a bunch of casting.

I'm not much of a server guy, so I might be wrong here, but dealing with strings seems like something a server is likely to need to deal with. Although, maybe the strings are short enough that it's not much of a problem, or maybe strings that are almost always ASCII are fast. Anyway, something to keep in mind.


Were your performance problems before or after Swift 5's switch to using UTF-8 as the preferred encoding? See https://swift.org/blog/utf8-string/ . It sounds like before, so you may want to check out the String performance again.

Swift 5.0 and 5.1 each supposedly made some 10-20% gain in ARC performance compared to the previous version (4.2 and 5.0), so that may also help. I think Swift's performance can be very hit or miss but they (Apple) are making some good strides.


It was Xcode 10.2, I think (10.something), so I think that's Swift 5.

hasSuffix() doesn't need UTF-8 to be performant, though. As long as both strings are in the same encoding, just go back len(suffix) bytes from the end of the string and memcmp to see if they are equal.
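
In Swift terms you can approximate the memcmp approach by comparing the UTF-8 views directly, skipping the grapheme-cluster machinery (a rough sketch; it deliberately ignores Unicode canonical equivalence, which hasSuffix() respects):

    // Byte-wise suffix check on the UTF-8 code units.
    func fastHasSuffix(_ s: String, _ suffix: String) -> Bool {
        let sBytes = s.utf8
        let sufBytes = suffix.utf8
        guard sBytes.count >= sufBytes.count else { return false }
        return sBytes.suffix(sufBytes.count).elementsEqual(sufBytes)
    }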


You can commit a sin and make all strings be NSString


Yes.

I’ve made a backend for my app that is essentially a middleware layer to an external API.

It’s entirely written in Swift, using the Vapor web framework, packaged up in a Docker container, and hosted on Google Cloud Run (which is essentially serverless for Docker containers).

https://cloud.google.com/run/

Previously it was running on AWS Elastic Beanstalk, but the devops knowledge to get it running on Beanstalk was significantly more, and realistically beyond my capabilities. Towards the end there was an intermittent SSL issue on Beanstalk I couldn’t figure out how to fix.

Since moving to Cloud Run its been running extremely smoothly and is costing a fraction of the price.

It is in production, but the daily active users are measured in the thousands, not tens of thousands.


Yes, but it can be a pain, and I wish more people were part of the ecosystem.


Very interested in this. Would like to try it out for an AI framework that combines knowledge graphs with word embeddings and expert systems, and although the Python ecosystem is the de facto standard (despite lack of safety and performance), and Rust seems theoretically the best, Swift seems to check all the boxes in terms of safety and expressiveness for me. Swift even has a relatively huge community, but it's so focussed on iOS stuff.

I know Chris Lattner did something with TensorFlow though.

Is anyone into this?


IIRC, Chris is leading the Swift for Tensorflow team at Google.

I run a startup that utilizes CoreML pretty heavily in mobile apps, and I'm eager to eventually build out a web app with the backend entirely in Swift. Being able to do everything in one language is really enticing.


I would not, because it's slow to compile and doesn't have a good backend & dev tooling ecosystem, due to being bound to Apple for the most part.

Kotlin would probably be a better idea, and if you really need some sort of perf boost, use Rust or C++.


The site is infinitely redirecting for me; https://web.archive.org/web/20190926183124/https://blog.tima... is the archive.org version.


Are you visiting this page https://blog.timac.org/2019/0926-state-of-swift-ios13/ ?

Which browser are you using? The site has been tested on a couple of platforms and different browsers. I can't reproduce such a redirecting problem.


Yep, same page.

Chrome 76.0.3809.132 (Official Build) (64-bit) (cohort: Stable), Windows 10.

That said, I guess I needed to update Chrome, so I just did to 77.0.3865.90 (Official Build) (64-bit) (cohort: Stable) , and same error.

Here's a "copy as cURL" for the request, maybe that can help?

curl 'https://blog.timac.org/2019/0926-state-of-swift-ios13/' -H 'authority: blog.timac.org' -H 'cache-control: max-age=0' -H 'upgrade-insecure-requests: 1' -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.90 Safari/537.36' -H 'sec-fetch-mode: navigate' -H 'sec-fetch-user: ?1' -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3' -H 'sec-fetch-site: cross-site' -H 'accept-encoding: gzip, deflate, br' -H 'accept-language: en-US,en;q=0.9' --compressed


Do you think Objective C will be phased out some day?

(As in not usable for development in the Apple ecosystem)


Objective-C will probably be phased out before C will. Objective-C libraries are themselves a wrapper over optimized C, which is what constitutes most of iOS's core.

I don't think Swift will phase out C for a very long time. It still needs to sort out its performance issues related to automatic reference counting and global locking, which requires something close to Rust's borrow checker, but with even more advanced technology to keep the syntax beginner-friendly.


Will start using Swift once they add C++ interop. Right now mixing Objective-C and C++ is easy. This is not the case for Swift.


Happy to see it; it's a wonderfully designed language that's a joy to write. Super powerful enums, pattern matching, etc.


Is this why the Podcast app has become so buggy?


Podcasts still appears to be mostly Objective-C.



