I like to think of how much time a language makes you spend on API design as a spectrum, and Rust in particular sits very much towards the extreme end. There are a lot of languages out there with generics that don't place the cognitive load on you that Rust's borrow checker does. I suggest you give those languages a try if you feel that 30 minutes is too long.
I must say I am biased, since I develop a language that I believe fits the sweet spot here. I would encourage you to give it a try: https://nim-lang.org
I don't consider myself a particularly good programmer, and I don't find the borrow checker to be a substantial source of cognitive load.
I think I get way more friction from:
- library authors that go crazy with generics.
- the million ways rust makes performance tradeoffs explicit (even when you're writing part of the program where performance simply doesn't matter).
- the million-and-one things you can do with an option type (see the sketch after this list).
- the arcane macro syntax.
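To illustrate the Option point above, here's a sketch of just some of the roughly overlapping ways the standard library lets you consume an Option<i32>; nothing here is specific to any third-party library:

```rust
// A sketch of the surface area being complained about: several of the
// roughly-overlapping ways to consume an Option<i32>.
fn main() {
    let maybe: Option<i32> = Some(2);

    let a = maybe.map(|n| n * 10);                 // Some(20)
    let b = maybe.and_then(|n| if n > 0 { Some(n) } else { None });
    let c = maybe.unwrap_or(0);                    // 2, or 0 if None
    let d = maybe.unwrap_or_else(|| 0);            // same, but lazily evaluated
    let e = maybe.ok_or("missing");                // Ok(2), or Err("missing")
    let f = maybe.filter(|n| n % 2 == 0);          // Some(2) only if even
    let g = if let Some(n) = maybe { n } else { 0 };

    println!("{a:?} {b:?} {c} {d} {e:?} {f:?} {g}");
}
```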
I could probably go on. I've been writing a lot of rust because I wanted a low-ish level language with good C interop, and I didn't want to learn C++, so I wasn't at any point a rust fanboy. I very much appreciate not having to be a genius to avoid segmentation faults, but other than that, I think rust is a somewhat ugly language - for the simple reason that it has a lot of features, some of which partially overlap, and most of which are vaguely inconsistent.
I'm probably sounding a bit harsher than I mean to - obviously, you can't expect a language like rust to have the simplicity and consistency of something like lua, and by and large, it's better than anything else I write programs in.
Still, the point is, as somebody with a lot of (fair and unfair) complaints, the borrow checker is not one of them. It complains rarely, and when it does you're almost always doing something stupid, unless you're doing something fairly arcane, in which case you should probably use unsafe.
Rust effectively forces you to code like you're writing a multi-threaded app even when you aren't. There's a reason why people suck at writing multi-threaded apps: because it's hard. This is what people are fighting when they fight the borrow checker. And this is why so many people find it frustrating. There are all sorts of designs that flat-out don't work or are way more effort than they're worth.
This is a really common misconception, but all of the borrow checker's rules are necessary to fully verify the memory safety of single-threaded programs too.
It just turns out that mathematically proving GC-free code to be free of use-after-free and double-free errors is really difficult unless you disallow a lot of things.
It's not just about multi-threading; it's also about dealing with memory in a way that's safe in a single thread, without introducing GC, which we know from C is hard too.
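For concreteness, a minimal sketch of the kind of purely single-threaded bug those rules exist to reject (the names here are just for illustration):

```rust
// A purely single-threaded use-after-free that the borrow checker rejects.
// There are no threads anywhere; the problem is lifetimes alone.
fn main() {
    let r;
    {
        let s = String::from("hello");
        r = &s;              // borrow `s`
    }                        // `s` is dropped (freed) here
    // println!("{r}");      // uncommenting this gives error[E0597]:
                             // `s` does not live long enough
}
```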
In my experience, most of the painful difficulties people have with the borrow checker come from the fact that it works like a read-write lock. AFAIK this aspect isn't required to have safe single-threaded code without a GC.
How does this relate to whether or not you plan to mutate things? Wouldn't plain old reference counting work for this, regardless of whether or not they're mutable, rather than what Rust does - allowing multiple readers and no writers, or zero readers and one writer?
Refcounting is GC; it automatically detects at runtime that an object has no references. It also adds a substantial cost that Rust is trying to avoid, often by copying immutable objects (which have better cache locality than random access to refcounts everywhere).
Oh, I get it, you're suggesting all references could be mutable, but still limited to the lifetime of the object. I guess the downside would be that you couldn't design an API that relies on "some (mutable) methods can't be called while any other (immutable) references exist" - the kind of guarantee that prevents things like iterator invalidation.
It would mean that, once you obtain a derived pointer – say, starting from a Vec<Foo>, a pointer to the Foo in its first element – you would have to throw away that pointer as soon as you made any function call whatsoever. After all, that function call might mutate the Vec<Foo> and cause the backing storage to be reallocated.
In practice, this is unworkable enough that it would basically force you to reference-count the Foo, so you could preserve pointers across calls by incrementing the reference count.
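Here's a minimal sketch of that Vec<Foo> situation (Foo is just a placeholder type):

```rust
// A sketch of the "derived pointer" problem described above.
struct Foo(u32);

fn main() {
    let mut v: Vec<Foo> = vec![Foo(1), Foo(2)];
    let first: &Foo = &v[0];    // derived pointer into the Vec's backing storage
    // v.push(Foo(3));          // uncommenting gives error[E0502]: push needs
                                // &mut v and might reallocate, dangling `first`
    println!("{}", first.0);    // fine: nothing has mutated `v` meanwhile
}
```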
On the other hand, Rust's approach is suboptimal in cases where you're using reference counting anyway, or are willing to pay the cost of reference counting.
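For comparison, a sketch of what opting into reference counting looks like in Rust: Rc tracks the count, and RefCell moves the aliasing check to run time (it panics on a conflicting borrow) instead of rejecting the program at compile time.

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Explicitly paying for reference counting: Rc keeps the count,
// RefCell checks the read-write rule at run time instead of compile time.
fn main() {
    let v = Rc::new(RefCell::new(vec![1, 2, 3]));
    let alias = Rc::clone(&v);      // second handle; refcount is now 2
    alias.borrow_mut().push(4);     // mutate through one handle
    println!("{:?}", v.borrow());   // [1, 2, 3, 4] observed through the other
}
```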
Yes, exactly. One of my first big fights with the borrow checker was over not realizing I was invalidating an iterator by mutating the collection while looping over it. Then it clicked for me.
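For anyone who hasn't hit it yet, a minimal sketch of that exact fight:

```rust
// Mutating a Vec while iterating over it is the classic iterator-invalidation
// bug; the borrow checker rejects it because the loop borrows the Vec.
fn main() {
    let mut v = vec![1, 2, 3];
    for x in &v {                    // the iterator borrows `v` for the whole loop
        // v.push(*x);               // uncommenting gives error[E0502]: cannot
                                     // borrow `v` as mutable while it's iterated
        println!("{x}");
    }
    v.push(4);                       // fine: the iterator's borrow has ended
}
```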
Instead of a spectrum, I think of it more qualitatively. Rust says that lifetime is a fundamental, necessary part of an API's design. To call an API, you have to think about the ownership of what you send to it and what you receive.
C also makes you think about ownership in your API design, it's just the language doesn't give you any real tools to express or enforce it. C++ gives you a bunch of tools and options, but you have to hope that you and the API you want to use agreed on which subset of the tools to use.
Garbage collected languages specifically take memory management out of the API design by declaring that the runtime will take care of it for everyone.
If you want lifetime to be something an API can control, then I think Rust's approach makes sense even though it obviously adds complexity. If you don't, then, yes, removing it from the equation definitely lowers the API design burden.
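To make that concrete, here's a sketch (with made-up function names) of how the ownership decision shows up directly in a Rust signature, so the caller can read it off the type rather than trusting documentation:

```rust
// Illustrative signatures: the ownership story is part of the API's type.
fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
    // Borrows both arguments and returns a borrow tied to their lifetimes.
    if a.len() >= b.len() { a } else { b }
}

fn consume(s: String) -> usize {
    // Takes ownership; the caller gives the String up for good.
    s.len()
}

fn main() {
    let x = String::from("hello");
    let y = String::from("hi");
    println!("{}", longest(&x, &y)); // x and y are only lent out here
    println!("{}", consume(x));      // x is moved; using it afterwards won't compile
    println!("{}", y.len());         // y is still owned by the caller
}
```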
In many ways it's analogous to having language support for strings. In C/C++, you gotta hope that the library you're using has the same approach to strings that you want (std::string? char*? wchar_t? something else?). In newer languages, it's just a given. (For better or worse: because then you end up stuck with UTF-16 in some languages.)
I've always said that if you can write Ruby/Scala, you can probably write simple Rust with very similar levels of productivity once you get past the initial learning curve. But apparently there's a sizable population that finds Ruby/Scala hard, confusing, or a significant cognitive load too.