Right. This is one of the things I don't like about automatic reference counting: it's almost a complete solution, but cycles require more thought (though in fewer cases) to deal with than manual memory management itself.
It would be interesting to compare the cognitive load of the Swift way and the Rust way. Rust gives you better guarantees, but you must annotate your code.
Rust doesn't just give you better guarantees. Its checks happen at compile time, so there's no runtime cost, and you potentially get even better performance than naive C, because it can automatically tell LLVM when there is no memory aliasing going on.[0] Also, it gives you more confidence, because you can, for example, give out an array to someone and be sure that it won't be modified. And usually that kind of confidence is enough to write safe, race-free parallel code.
I don't think Swift's runtime exclusivity checks give you any of that.
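For instance, a minimal Rust sketch of the "hand out an array and know it can't be modified" guarantee (the names here are just illustrative):

    fn sum(xs: &[i64]) -> i64 {
        // `xs` is a shared borrow: this function cannot mutate the slice,
        // and the compiler guarantees nobody else mutates it while we read it.
        xs.iter().sum()
    }

    fn main() {
        let data = vec![1, 2, 3];
        let total = sum(&data); // hand out a read-only view, no copy made
        println!("{:?} sums to {}", data, total); // `data` is provably unchanged
    }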
Swift arrays have pass-by-value semantics (under the hood they use copy-on-write to keep performance good) and are thus safe to pass to functions without concern that they'll be modified. (Unless the array is passed as an `inout` parameter, which requires a prepended "&" at the call site.)
Rust can use its own bovine superpowers here https://doc.rust-lang.org/std/borrow/enum.Cow.html to enable developers to write code that's invariant over whether it's dealing with a fresh copy of something, or just a borrowed reference. So, it's not quite correct to say that Swift's use of copy-on-write gives it better performance. Rust can express the exact same pattern quite easily, and do it with far more generality.
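To make that concrete, a rough sketch of the pattern with std::borrow::Cow (the function and its behaviour are made up for illustration):

    use std::borrow::Cow;

    // Returns the input unchanged (borrowed) when possible, and only
    // allocates an owned copy when it actually has to modify the data.
    fn sanitize(input: &str) -> Cow<'_, str> {
        if input.contains(' ') {
            Cow::Owned(input.replace(' ', "_"))
        } else {
            Cow::Borrowed(input)
        }
    }

    fn main() {
        let a = sanitize("already_clean"); // no allocation, just a borrow
        let b = sanitize("needs fixing");  // allocates an owned String
        println!("{} / {}", a, b);
    }

Callers can treat both results uniformly; the copy only happens on the path that actually needs it.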
I don't mean to imply that Swift's array performance is better than anything other than its own performance would be without COW. (My intention was to convey that the normal way of passing arrays comes with confidence that your copy won't be modified by the callee.) Rust sounds nice, I'll have to try it sometime.
Are you sure that code expresses the same pattern? It looks like Rust's COW can only transition from Borrowed to Owned by making a copy. Swift's COW works by dynamically inspecting the reference count.
Dynamic linking isn't a problem. Rust's checking is local, and doesn't require checking things in other functions, just their signatures. Swift is the same.
There are some cases where dynamic checks are required (e.g. with reference counting/shared ownership: class in Swift, Rc/Arc in Rust). Swift will automatically insert checks in these cases, whereas they have to be manually written into Rust code (via tools like RefCell). Swift and the APIs inherited from Obj-C use a lot more reference counting than Rust does by default, plus it's more difficult for the programmer to reason about the checks (e.g. to be able to write code that avoids them, to optimise) when they're implicit.
In summary, Rust and Swift have essentially the same rules, but Rust requires manual code to break the default rules, whereas the Swift compiler will implicitly insert the necessary checks (in some cases).
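A small sketch of what those manually written dynamic checks look like on the Rust side (Rc for shared ownership, RefCell for runtime-checked exclusivity):

    use std::cell::RefCell;
    use std::rc::Rc;

    fn main() {
        // Shared ownership plus exclusivity that is checked at runtime
        // instead of by the borrow checker.
        let value = Rc::new(RefCell::new(0));
        let alias = Rc::clone(&value);

        *value.borrow_mut() += 1;     // ok: no other borrows are live
        let reader = alias.borrow();  // shared borrow is fine
        // alias.borrow_mut();        // would panic here: already borrowed
        println!("{}", reader);
    }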
That's the point: with dynamic linking the static signature encodes too much.
In Rust sometimes you need to switch from FnOnce to Fn, or from Rc to Arc. These changes are binary incompatible. You have to recompile every client.
Swift can't tolerate that restriction. UIKit doesn't want to have to pick from among Fn/FnOnce/etc for every function parameter, and commit to it forever.
Swift types and functions need to evolve independently from their clients, so static checks are insufficient. That's why you see more dynamic checks. If Rust had the same dynamic linking aspirations it would insert more dynamic checks as well.
No, Swift needs the static signature of the library for dynamic linking too, by default.
You are correct that Swift wants to be able to change signatures without recompiling clients ("resilience"), but this is very limited, especially for changes that would affect the `inout` checking (e.g. one cannot change an argument to be inout): https://github.com/apple/swift/blob/master/docs/LibraryEvolu... (note the list of non-permitted changes includes changing types).
This doesn't sound like a very good example. Fn inherits FnMut inherits FnOnce, so a library should just ask for the freest one it can support. Usually that just falls out of the meaning of the function. For example, a callback should only be called once, so you ask for an FnOnce, or maybe some function will be called in parallel, so you ask for an Fn. And with Rc, RefCell, Arc, etc. a dynamic approach is also pretty well-supported.
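For example (a toy sketch, not any real API):

    // A completion handler only needs to run once, so ask for FnOnce:
    // every closure satisfies it, including ones that consume captured state.
    fn on_finished<F: FnOnce(i32)>(callback: F) {
        callback(42);
    }

    // Something that may be invoked repeatedly has to ask for the
    // stricter Fn bound instead.
    fn for_each_event<F: Fn(i32)>(handler: F) {
        for event in 0..3 {
            handler(event);
        }
    }

    fn main() {
        let message = String::from("done");
        on_finished(move |code| {
            let owned = message; // moves `message` out, so this closure is FnOnce-only
            println!("{} ({})", owned, code);
        });
        for_each_event(|event| println!("event {}", event));
    }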
Apple's APIs last for years, in some cases decades. In this world, properties like "this object can only have one reference" and "this function can only be called once" become arbitrary constraints on the future.
Think about instances where you've refactored a RefCell to an Rc or a FnOnce to a FnMut, and consider what it would be like if you were unable to make that change because it would break the interface. It would be profoundly limiting.
Rust does not have a stable ABI at present, so practical use of dynamic linking would require falling back to some baseline ABI/FFI for everything that crosses shared-object boundaries (such as the C FFI), or would require that a shared object be compiled with the same compiler version and options as all of its users. This seems like it would be a severe pitfall.
I'm referring only to the thing being discussed in this post. To expand for precision: dynamic linking doesn't force the Swift compiler to insert more dynamic checks for exclusivity enforcement.
AIUI, Rust can do it "the Swift way" (a misnomer, since it was first) for any given object simply by adding a RefCell<…> or Mutex<…> type constructor, as appropriate. These types are commonly used in combination with reference-counting smart pointers, Rc<…> or Arc<…>. (The smart pointers focus on making the "multiple ownership" aspect work, but this entails that these types have to return shared references, which are ordinarily read-only. The RefCell or Mutex type constructors then focus on enabling "runtime-checked exclusive access" on top of the native, shared reference types).
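Concretely, a sketch of the thread-safe flavour (Arc for the multiple ownership, Mutex for the runtime-checked exclusive access):

    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        let counter = Arc::new(Mutex::new(0));

        let handles: Vec<_> = (0..4)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    *counter.lock().unwrap() += 1; // exclusivity enforced at runtime
                })
            })
            .collect();

        for handle in handles {
            handle.join().unwrap();
        }
        println!("{}", counter.lock().unwrap()); // prints 4
    }

Swap Arc/Mutex for Rc/RefCell and you get the single-threaded equivalent with the same shape.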
The problem with RefCell/Mutex is granularity. If you wrap the entire object in one of them, the mutable borrow you need to write a field blocks anyone else from accessing the object at all, even to read unrelated fields. Note that Swift 5 exclusivity enforcement is imposing exactly this limitation on Swift "structs", but not its heap-allocated "classes".
In Rust, an alternative is to wrap individual fields in RefCell/Mutex, but that results in uglier syntax – you end up writing RefCell/Mutex and .borrow()/.borrow_mut() a lot of times – and adds overhead, especially in the Mutex case (since each Mutex has an associated heap allocation). There are alternatives, like Cell and Atomic*, that avoid the overhead, but have worse usability problems. I've long thought Rust has room for improvement here...
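Roughly, the two granularities look like this (a sketch with made-up field names):

    use std::cell::{Cell, RefCell};

    // Coarse-grained: one RefCell around the whole object. A mutable borrow
    // taken to write `name` blocks every other access, even reads of `hits`.
    struct Coarse {
        name: String,
        hits: u64,
    }

    // Fine-grained: each field wrapped separately, so unrelated fields stay
    // independently accessible, at the cost of noisier types and call sites.
    struct Fine {
        name: RefCell<String>,
        hits: Cell<u64>,
    }

    fn main() {
        let coarse = RefCell::new(Coarse { name: "a".into(), hits: 0 });
        coarse.borrow_mut().hits += 1;      // locks out the whole struct

        let fine = Fine { name: RefCell::new("a".into()), hits: Cell::new(0) };
        fine.hits.set(fine.hits.get() + 1); // other fields remain untouched
        fine.name.borrow_mut().push('b');
    }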
You can get rid of the heap allocation for Mutex by using parking_lot, and there is work underway to merge this into the standard library. The rest of what you wrote sounds right to me.
The cognitive load is higher in Rust: not only do we have to annotate the code, but to get the same semantics as Swift we have to write Rc<RefCell<T>> everywhere, which isn't that ergonomic.
On the other side, as discussed recently at CCC regarding writing drivers in safe languages, Swift's reference-counting GC generates even less performant code than Go's or .NET's tracing GC, so there is also room for improvement there.
I always assumed that RC has much better worst-case latency than tracing. Given that Swift is a language whose primary target is user interfaces, doesn't it make sense to optimize for that?
RC is the easiest GC algorithm to implement, that is all.
It is quite bad for shared data, because updating the reference count requires atomic operations or a lock and also thrashes the cache, both bad ideas on modern hardware architectures.
Also, contrary to common belief, it has stop-the-world issues as well, because releasing a graph-based data structure can trigger a cascade of count == 0 releases, with similar behaviour.
Which, when coupled with a naive implementation of destructors, can even cause stack overflows, due to the nested destructor calls as the data is released.
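The classic Rust illustration is a long Box-linked list: the compiler-generated recursive drop blows the stack for long enough lists, so the teardown has to be written iteratively instead (a sketch):

    struct Node {
        next: Option<Box<Node>>,
    }

    struct List {
        head: Option<Box<Node>>,
    }

    // Without this impl, dropping a very long list recurses once per node
    // and can overflow the stack; this version walks the list iteratively.
    impl Drop for List {
        fn drop(&mut self) {
            let mut current = self.head.take();
            while let Some(mut node) = current {
                current = node.next.take(); // detach before `node` is dropped
            }
        }
    }

    fn main() {
        let mut list = List { head: None };
        for _ in 0..1_000_000 {
            list.head = Some(Box::new(Node { next: list.head.take() }));
        }
        // `list` is dropped here without overflowing the stack.
    }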
So when you start adding optimizations to deal with delayed destruction, non-recursive destruction calls, lock-free counting and a cycle collector, you end up with machinery similar to a tracing GC anyway.
Finally, what many seem to forget: just because a language has a tracing GC, it doesn't mean that every single memory allocation has to go through the GC.
When a programming language additionally offers support for value types, stack allocation, static allocation in the global segment and manual allocation in unsafe code, it is possible to enjoy the productivity of a tracing GC while having the tooling to optimize memory usage and responsiveness when required to do so.
Having said all of this, RC makes sense in Swift because it simplifies interoperability with the Objective-C runtime. If Swift had a tracing GC, it would need an optimization similar to what .NET has for dealing with COM (see Runtime Callable Wrapper).
It depends on your domain. Any data structure that has links in both directions becomes a pain, and it's pretty common to have trees like that. It's usually solved by having parents own the children but not vice versa, but then you run into cases where you want the whole thing to live so long as someone holds an owning reference to a child.
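The usual Rust shape of that "parents own children, children point back weakly" arrangement looks roughly like this (a sketch):

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    struct Node {
        // The parent link is Weak, so the two directions don't form a strong cycle.
        parent: RefCell<Weak<Node>>,
        children: RefCell<Vec<Rc<Node>>>,
    }

    fn main() {
        let root = Rc::new(Node {
            parent: RefCell::new(Weak::new()),
            children: RefCell::new(Vec::new()),
        });
        let child = Rc::new(Node {
            parent: RefCell::new(Rc::downgrade(&root)),
            children: RefCell::new(Vec::new()),
        });
        root.children.borrow_mut().push(Rc::clone(&child));

        // Walking up only works while the parent is still alive: holding a
        // child does not keep the root alive, which is exactly the limitation
        // described above.
        assert!(child.parent.borrow().upgrade().is_some());
    }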
Many people in the Rust community use some sort of ECS in cases where something like a GC is needed. The whole point of an ECS is that it works a lot like an ordinary GC runtime or an in-memory database: the underlying representation ensures that the system knows about all the "entities" it needs to, and can reliably trace linkages between them. It might be easier to just use Go in cases where a tracing GC is needed, though.
In a language with tracing GC, you'd use a strong reference in both directions, so anybody who holds the child can always walk their way to the root (if you expose those references). Without such GC, you basically have to pass the root around, because it's what's holding all those children alive.
What's not solved with manual memory management? The point is not that you can't solve it; the point is that reference counting isn't a complete solution to garbage collection, unlike a tracing GC, for instance.
You can add a cycle collector on top of a reference-counted language. It's been discussed for Swift, but there are downsides which might make it not worth adding.