As a Python programmer with limited experience with compiled languages, I found Rust more intimidating to read than C++, Java, or Go. After only an hour, I am overwhelmed by the sheer beauty and mature design of this language - it almost reads like Python, or as well as any compiled language can. I cannot believe that I am smitten by Rust within an hour. Its features seem obvious. My experience with Go was frustrating, since I felt like I had to give up elegance for practicality. I did not expect to learn 1-2% of Rust when I woke up today. My thanks to the author.
I'm curious what other developers who primarily use Python think of this article and Rust in general?
I've shipped a fair amount of code in Python and a little in Rust.
For any project where performance or correctness are significant concerns I'd much rather use Rust than Python. It provides a lot more tools to help you write correct code and express invariants in a machine-checkable way. This means that they are maintained when the code changes, even when multiple contributors are involved. I'm much more confident in my ability to ship correct code in Rust and maintain that correctness over time.
Rust also doesn't have the performance ceiling (floor?) of Python, either for single-threaded programs, or — especially — for those that benefit from concurrency or parallelism. Of course for some programs there's a clear hot loop that you could implement as a native extension in Python, but other workloads have relatively flat profiles where the bottleneck is the cost of allocations or similar.
As well as the actual language features, the commitment to backward compatibility from the Rust authors means that you don't get regular breakage simply from updating to a newer version of the language. I think Python 2 staying at 2.7 for such a long time gave people a false impression of how stable a foundation Python provides. The fact that point releases of 3.x often cause problems creates a lot of unnecessary makework for Python users and is rather off-putting.
That doesn't mean I'd always choose Rust of course. You do require more upfront design to get something that works at all. Some projects also benefit greatly from not requiring a compile step and making use of the ubiquity of a Python interpreter. And I think I'd find it difficult to justify using Rust for a typical web backend that's mostly composing together various well-tested libraries to provide an API on top of a database.
I don't think the parent is talking about Python 2 -> 3. I think they are pointing out that for many developers, moving to Python 3 happened very late. That means they ran 2.7 since 2010 (roughly 10 years now.) Now that they are on 3.x, they get issues with each new 3.x release.
The argument is that being on 2.7 for so long gave a false impression of the language being more stable than it is - developers just weren't using new versions.
Well, to be fair, the 2020 end-of-life date was announced almost a decade ago. That said, I know of Fortune 500 companies that used 2.7 to develop new code in 2017 -- which was a mistake -- mostly because they were tied to an old version of RHEL and were too lazy to try to get Python 3 working on it.
So 2020 rolled around and the PyPI libraries they were using were no longer supported, even though RHEL would still support Python 2.7. So yeah, it was a really bad experience for everyone involved.
My housemate was just rolling back from Python 3.8 to 3.7 yesterday due to a backwards-incompatible change breaking a library... it's not just 2 -> 3 that makes Python relatively unstable compared to Rust.
Epochs (now called editions) don't break libraries - that's the whole point.
They manage this by allowing using a different epoch in a library than the application that uses it, and defaulting to the original epoch if you don't ask to use a new one.
The vast majority of Rust 1.0 libraries should still work with modern Rust programs. I vaguely recall that there was one minor backwards-incompatible change made as a result of a memory-safety bug (an exception to the guarantee), but I can't remember what it was and can't find it with Google, so it may actually be that all Rust 1.0 libraries still work.
The comment is specifically complaining about point releases of 3.x; it only mentions 2 in an unrelated way.
The fix for https://bugs.python.org/issue40870 caused me some grief when upgrading from 3.8.3 to 3.8.4 (though to be fair, the ast module has different stability guarantees).
Some industries are very very slow when it comes to upgrades. E.g. the VFX reference platform[0] only adopted python 3 for the 2020 baseline, which means many recently released products are still embedding python 2.x [1].
> For any project where performance or correctness are significant concerns I'd much rather use Rust than Python. It provides a lot more tools to help you write correct code, and express invariants in a machine-checkable way. This means that they are maintained when the code changes, even when multiple contributors are involved. I'm must more confident in my ability to ship correct code in Rust and maintain that correctness over time.
Ada/SPARK is more about mathematically proving that your program works. Rust is more about guaranteeing that your program won't trample memory and won't crash.
That said, Rust seems to have reasonably good performance compared to C/C++. Also the concept of lifetimes is interesting too.
Just because your program compiles in C++/Ada (non-SPARK)/C/Java/Rust doesn't mean it's correct. If it compiles in Rust, the compiler is telling you it's memory-safe and thread-safe.
If you have a lot of string handling the strictness and ease of safe zero copy in Rust would make it unbeatable for performance and safety.
Due to lifetimes it's easy to keep string slices as references into the original memory and if you do need to modify some strings, you can use CoW.
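The borrow-or-own pattern the comment describes can be sketched with the standard library's `Cow` type (`normalize_spaces` is a made-up example function, not from the thread):

```rust
use std::borrow::Cow;

// Sketch of copy-on-write string handling: return the input slice
// untouched when no change is needed, and allocate only when we
// actually have to modify it.
fn normalize_spaces(input: &str) -> Cow<'_, str> {
    if input.contains(' ') {
        Cow::Owned(input.replace(' ', "_"))
    } else {
        // Zero-copy: this still points into the caller's original memory.
        Cow::Borrowed(input)
    }
}

fn main() {
    // No spaces: no allocation happens at all.
    assert!(matches!(normalize_spaces("no_spaces"), Cow::Borrowed(_)));
    // Spaces present: an owned String is built.
    assert_eq!(normalize_spaces("a b"), "a_b");
}
```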
The way Rust deals with encoding also makes it much safer, since you need to be very explicit about which encoding you convert to what, and about whether you want to allow lossy conversions.
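The explicitness around encoding shows up directly in the standard library: strict conversion returns a `Result`, and the lossy variant has to be asked for by name.

```rust
fn main() {
    // "foo" followed by a byte that is invalid anywhere in UTF-8.
    let bytes = vec![0x66, 0x6f, 0x6f, 0xff];

    // Strict conversion: refuses the bad input instead of corrupting data.
    assert!(String::from_utf8(bytes.clone()).is_err());

    // Lossy conversion is a separate, explicit call; invalid bytes
    // become the U+FFFD replacement character.
    let lossy = String::from_utf8_lossy(&bytes);
    assert_eq!(lossy, "foo\u{fffd}");
}
```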
I don't think there is much of an equivalent in Ada for this. Ada has other benefits, like delta (fixed-point) types, which help a lot in embedded projects.
I was mostly a Python user, and now I primarily write Rust when I can. It's an absolutely gorgeous language, especially for being so practical.
There's a bit of a progression with Rust, where at first you see the examples and you go "this language is so beautiful!" Then you try to write something nontrivial in Rust and you go "wow the compiler will NOT stop yelling at me, how does anyone write anything in this language?" Rust has a fairly austere learning curve in general, and it takes some getting used to.
That passes, though, and then you really do get to experience the elegance and power which sold you on Rust in the first place. There's a period where writing Rust feels substantially slower than writing in anything else, but that passes too.
It really is as good as it sounds, but it might take a while before you're accustomed enough to Rust's paradigm that it feels that way.
> am overwhelmed by the sheer beauty and mature design of this language - it almost reads like Python or as well as any compiled language can.
All those lifetime annotations are not sheer beauty or pretty. Sure, they are necessary, but they're not nice to look at if you are comparing Rust to a higher-level language.
I think that being able to write down the lifetimes in the language is beautiful.
You don't have to do it, but if you want to do it, being able to do so in a way that's verified and enforced by the toolchain beats doing so in a documentation comment.
Now every time I read a doc string saying that I need to "deepcopy" something in Python for some API usage pattern to work properly I cringe.
That's one of the things that get me about Rust discourse: it seems that "Rust is pain in exchange for performance" is a common misconception. Rust is discipline in exchange for performance and correctness. A GC lets you be relatively worry-free as far as memory leaks go (and even then..) but it doesn't prevent a lot of the correctness problems the borrow checker would.
With a checker you're forced to think: do I really want to pass a copy/clone of this? Or do I want to let that function borrow it? Or borrow it mutably?
While I get the point about data races, a GC doesn't make (or help) you leak memory - you have simply postponed the free to a later point in time.
Assume you have some code that takes a file name and calls open on it. One day you decide you want to print that filename before you open it. Naive code will cause the name to "move" into print, leaving it unusable to the open call on the next line - even though it is perfectly understood by all parties that there is no threading involved and print will finish before the next use of that string. Yes, I can create a borrow or clone, but having to think about this on every single line of code, even when there is only one thread of execution, is really painful.
Edit: I get that print is a macro, but imagine a detailed logger for this case.
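The logger scenario can be sketched with two hypothetical signatures (neither function is from any real logging crate): one takes ownership and triggers exactly the move the comment complains about, the other borrows and avoids it.

```rust
// Hypothetical logger that takes ownership: calling it moves the String.
fn log_owned(msg: String) -> usize {
    msg.len()
}

// The idiomatic alternative borrows, so the caller keeps the value.
fn log_ref(msg: &str) -> usize {
    msg.len()
}

fn main() {
    let path = String::from("data.csv");
    log_ref(&path); // borrow: `path` is still usable afterwards
    let len = log_owned(path); // moves `path`...
    // println!("{path}"); // ...so this would be error[E0382]: borrow of moved value
    assert_eq!(len, 8);
}
```

In practice the fix is simply to make the logger take `&str`, which is also why `println!` borrowing its arguments is the behavior you want.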
I'd argue the exact opposite. Languages that don't have a concept of ownership and a borrow checker, and don't explicitly say whether they want ownership, a reference, or a mutable reference, force you to keep all of these details in your head.
Here, if I have a `&T` and I try to call a function that takes a `&mut T`, the compiler will tell me that's not gonna work - and then I can pick whether I want my function to take a `&mut T`, or if I want to make a clone and modify that, etc.
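A minimal sketch of that compiler conversation (the `bump`/`read` functions are invented for illustration):

```rust
// Mutable access has to be declared in the signature...
fn bump(counter: &mut i32) {
    *counter += 1;
}

// ...while shared, read-only access uses a plain reference.
fn read(counter: &i32) -> i32 {
    *counter
}

fn main() {
    let mut n = 0;
    bump(&mut n); // the call site spells out mutable access too
    // let shared = &n;
    // bump(shared); // error[E0308]: mismatched types: expected `&mut i32`, found `&i32`
    assert_eq!(read(&n), 1);
}
```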
There's a learning curve, it's a set of habits to adopt, but once you embrace it it's really hard to go back to languages that don't have it! (See the rest of the comments for testimonials)
I think both points are right. There's times when it's useful and desirable to be specific about lifetimes, and there's also times where it's annoying noise.
Bignum arithmetic is an example of the latter. You want to just work with numbers, and in Python you can, but in Rust you must clutter your code with lifetimes and borrows and clones.
Swift's plan to allow gradual, opt-in lifetime annotations seems really interesting, if it works.
Bignum arithmetic actually is pretty simple in Rust if you use the right library. Rug[0] makes using bignums look almost just like using native numbers through operator overloading.
The operator overloading is nice but you still get smacked in the face right away by the borrow checker. Simple stuff like this won't compile ("use of moved value"):
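The original snippet isn't shown here, but a minimal stand-in reproduces both the error and the `clone()` fix discussed below (`Big` is a made-up wrapper; the real type in question is rug's `Integer`, which is similarly `Clone` but not `Copy`):

```rust
// Stand-in for a heap-allocated bignum: Clone, but deliberately not Copy.
#[derive(Debug, Clone, PartialEq)]
struct Big(Vec<u64>);

impl std::ops::Add for Big {
    type Output = Big;
    fn add(self, rhs: Big) -> Big {
        // Naive limb-wise add with no carry handling, purely for illustration.
        Big(self.0.iter().zip(rhs.0.iter()).map(|(a, b)| a + b).collect())
    }
}

fn main() {
    let a = Big(vec![1, 2]);
    // let b = a + a; // error[E0382]: use of moved value: `a`
    let b = a.clone() + a; // the explicit clone makes it compile
    assert_eq!(b, Big(vec![2, 4]));
}
```

Because `Add::add` takes `self` by value, `a + a` tries to move `a` twice; cloning one operand resolves it.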
This would work if `Integer` implemented the `Copy` trait. But I guess that type is heavy enough for copying to be too expensive, therefore forcing you to explicitly call `clone()`.
I don't see how a GC would do anything with memory leaks. When you have a data structure containing some elements, and you keep a reference to the structure, because you need some data from it, but you let it grow, then you have a leak, GC or no GC. If anything, GC encourages memory leaks by making freeing memory implicit, something a programmer normally doesn't think about. And yet a resize/push operation on a vector (sic!) is just another malloc. One that will get ignored by a GC.
GC protects you against double-free and use-after-free, but memory leaks? Nope.
This is a memory leak in a non-GC language, but not in a GC language.
In a practical sense... is it your personal experience that memory leaks are equally prevalent in GC and non-GC languages? I've spent decades working in each (primarily C++ and Java, but also Pascal, C, C#, Smalltalk...) and my experience is that memory leaks were a _much_ bigger issue, in practice, in the non-GC languages.
This is a memory leak in a GC language as well, depending on what happens to foo in ...code that uses foo... Maybe it will become a field of some class, maybe it will get pushed to some queue, maybe it will get captured in a lambda, etc. You won't even be able to tell just from looking at this code in this one place alone, if you passed this pointer as argument to a function.
It is my experience that when people work with GC languages, they treat the GC as a blackbox (which it is) and simply won't bother investigating: do they have memory leaks? Of course not, they are using a GC after all, all memory-related problems solved, right? Right... With code that relies on free(), I can use a debugger and check which free() calls are hit for which pointers. Even better, I may use an arena allocator where appropriate and don't bother with free() at all. With a GC I'm just looking at some vague heap graph. Am I leaking memory? Who knows... "Do those numbers look right to you?"
Memory management issues are usually symptoms of architectural issues. A GC won't fix your architecture, but it will make your memory management issues less visible.
It is my experience that most memory problems in C come from out-of-bounds writes (which includes a lot more than just array access), not from anything related to free(). A GC doesn't help here.
> In computer science, a memory leak is a type of resource leak that occurs when a computer program incorrectly manages memory allocations in a way that memory which is no longer needed is not released.
In most of your scenarios, e.g. "pushed to some queue", the object in question is still needed. Hence, this is not a leak. Presumably, the entry will eventually be removed from the queue, and GC will then reclaim the object.
At any rate, I think we're moving past the point of productive discussion. My experience in practice is that memory leaks are more common / harder to avoid in non-GC languages. Of course a true memory leak (per the definition above) is possible in a GC language, but I just don't see it much in practice. Perhaps your experience is different.
What does "memory leak free" mean? It has to have a semantic meaning.
If I have a queue with elements and I won't be accessing some of them in the next part of the program, is it a leak?
Also, remember that certain GC'd languages intern strings. Is interning a memory leak, since the program will not necessarily use a given string anymore?
Well, you would need a way to "tell" the compiler more about your intentions and maybe even explicitly declare rules for what you consider a memory leak in certain scenarios.
To provide an example for a different case: programs written in Gallina have the weak normalization property, implying that they always terminate.
I have at times in the past made the (somewhat joking) observation that "memory leak" and "unintentional liveness" look very similar from afar, but are really quite different beasts.
With a "proper" GC engine, anything that is no longer able to be referenced can be safely collected. Barring bugs in the GC, none of those can leak. But, you can unintentionally keep references to things for a lot longer (possibly unlimited longer) than you need to. Which looks like a memory leak, but is actually unintentional liveness.
And to prove the difference between "cannot be referenced" (a property that can in principle be checked by consulting a snapshot of RAM at an instant in time) and "will not be referenced" (a larger set: we will never reference things that cannot be referenced, but we may not reference things that are reachable, depending on code) feels like it requires solving the halting problem.
And for free()-related problems, I've definitely seen code crash with use-after-free (and double-free).
> I have at times in the past made the (somewhat joking) observation that "memory leak" and "unintentional liveness" look very similar from afar, but are really quite different beasts.
Both situations prevent you from reusing memory previously used by other objects, which isn't being utilized for anything useful at that point. The distinction is valid formally, but from a practical point of view sounds rather academic.
> And to prove the difference between "cannot be referenced" (a property that can in principle be checked by consulting a snapshot of RAM at an instant in time) and "will not be referenced" (a larger set, we will never reference things that cannot be referenced, but we may not reference things that are reachable, depending on code) feels like it is requiring solving the halting problem.
As usual, looking for a general solution to such a problem is probably a fool's errand. It's much easier to write code simple enough that it's obvious where things are referenced. Rust's lifetime semantics help with that (if your code isn't all that simple, it will be apparent in the overload of punctuation). If not Rust, then at least it would be good if you could check liveness of an object in a debugger. In C you can check whether a particular allocation was undone by a free() or equivalent. I'm not aware of any debugger for e.g. Java which would let me point at a variable and ask it to notify me when it's garbage collected, but it sounds like something that shouldn't be too hard to do, if the debugger is integrated with the compiler.
You can write Rust code with almost no lifetime annotations. That's heavily dependent on the domain, of course, but for relatively simple function signatures they are usually inferred correctly, and you can often just clone instead of requiring references.
Pretty sure if you’re deserializing a structure with borrowed references you need lifetime annotations. Copying things around to pacify the compiler hardly seems elegant.
It’s not just performance: you now have to go back and update every instance of that struct to reflect the change in ownership of that member, if you can even change the member in the first place (e.g., you can’t change the type of a member if the struct itself is defined in a third-party package). Moreover, some instances of your struct might be in a tight loop and others might not, but now you’re committing to poorer performance in all cases. Maybe you can box it (although IIRC I’ve run into other issues when I tried this), but in any case this isn’t the “elegance” I was promised. Which is fine, because Rust is a living, rapidly-improving language, but let’s not pretend that there is an elegant solution today.
Libraries can be (and some actually are) designed with that in mind, with CoW (copy-on-write) types that can be either borrowed or owned. Speaking of performance, kstring (of liquid) goes one step further with inline variants for short strings.
I want to say that seems like a valid concern but in practice I've seen it come up only rarely.
Most languages just clone all day long; it's not that bad. Rust clones (like in most languages) are only deep down to the first reference-counted pointer, after all.
E.g. cloning a string leads to an extra allocation and a memcpy.
If you want a performance profile similar to GC languages, you have to stick your types behind an `Rc<T>` / `Arc<T>`, or an `Rc<RefCell<T>>` / `Arc<Mutex<T>>` if you need mutability.
But modern allocators hold up pretty well against a GC, which amortizes allocations. The extra memcopying can be less detrimental than one might think.
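The `Rc<RefCell<T>>` pattern mentioned above, sketched minimally: shared ownership via reference counting plus interior mutability checked at runtime, which is roughly what a GC language hands you implicitly.

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    let shared = Rc::new(RefCell::new(vec![1, 2, 3]));
    // Rc::clone bumps the reference count; no deep copy of the Vec.
    let alias = Rc::clone(&shared);
    // RefCell checks the borrow rules at runtime instead of compile time.
    alias.borrow_mut().push(4);
    // Both handles see the same underlying data.
    assert_eq!(*shared.borrow(), vec![1, 2, 3, 4]);
    assert_eq!(Rc::strong_count(&shared), 2);
}
```

For the multi-threaded case, `Arc<Mutex<T>>` plays the same role with atomic counting and locking.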
“Yes and no” is too generous; most languages clone only very infrequently (basically only for primitives where there is no distinction between a deep and shallow copy). For complex types (objects and maps and lists) they pass references (sometimes “fat” references, but nevertheless, not a clone).
In JavaScript, one is just using the variable name as a "holder" of some value. One doesn't have to designate that that variable is being passed by reference. If one wanted to actually copy that object, they'd have to devise a mechanism to do so. In Rust, if someone doesn't specify, using the & symbol, that something is a reference, it'll end up moving the value.
Basically, all I was saying is that one cannot approach writing Rust with a Java/JavaScript mindset (that a variable is just a bucket holding a value). Care needs to be taken when referencing a variable, as it may need to be moved, copied/cloned, or referenced. In the case of copying, another memory allocation is done. So if someone approaches Rust from the standpoint of "this word represents a value, and I'm going to use it all over the place", they can find themselves blindly allocating memory.
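The three choices side by side, as a sketch: where JS's `let b = a` silently copies a reference, Rust makes you pick between borrowing, cloning, and moving.

```rust
fn main() {
    let a = vec![1, 2, 3];

    let b = &a; // borrow: no allocation, `a` stays usable
    let c = a.clone(); // clone: a fresh heap allocation and copy
    assert_eq!(b, &c);

    let d = a; // move: `a` is gone from this point on
    // println!("{a:?}"); // error[E0382]: borrow of moved value: `a`
    assert_eq!(d, c);
}
```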
Oh, by “reuse variable names”, you simply mean that values aren’t moved, i.e., that JS lacks move semantics or an affine type system. That’s a bit different to “reusing variable names”, which you can do in Rust as well provided the variable holds a (certain kind of?) reference.
I agree it's not pretty. It took me several passes to just understand what was going on there. But ownership is something I've never come across before and it doesn't exist in Python (not to an average user anyway), so it seems a little unfair to compare Rust and Python based on that feature.
However, when you take a high-level feature like operator overloading, which can be done elegantly in Python, Rust's way is, for a compiled language, quite concise, readable, and to my eyes quite pretty.
Just started listening to it. It's exactly what I was looking for, thank you. Armin Ronacher's opinion on such things would hold a lot of weight in my head.
Also, Python code is easier on the eyes (less syntax noise). The article above is very beginner Rust, and I won't rely on it to judge actual Rust code in the wild.
If you want to see what actual Rust code in the wild looks like, take for example a web server like Actix, and try to figure out what the documentation says.
I find that advanced Python is quite hard to read. Well, I find that advanced dynamically typed languages tend to be hard to read in general, because you can never be sure what type your inputs and outputs are and what values they can take. Usually my
At least idiomatic Python shuns wildcard imports that obfuscate where symbols are coming from (that drove me insane in Ruby).
On the other hand, Rust code tends to be very strict about what your types are (even more so than a language like C++, where metaprogramming is duck-typed by default). It can lead to complicated code, but barring some weird operator-overloading decision, you should know exactly what calls what when you look at any method or function.
Rust code can be tricky to write at times, but I generally find it a pleasure to read. Sure, ultra-complicated generic code can be overwhelming, but this complexity would be here in one form or the other regardless of the language, Rust just forces you to be explicit about it.
> Rust makes hard things easy and easy things hard.
I don't agree with this, and, if anything, this reads very biased.
So far, Rust has made my life a lot easier, and I have not run into any major issues aside from the borrow checker. And this was early on. Two years now playing with the language, and I barely run into it anymore.
So does Python. Asyncio is mediocre at best and difficult to express certain patterns in. Other libraries are far better, but not in the standard library.
Agreed it could be a little more ergonomic, but it wasn't a blocker. I was able to figure it all out in a weekend, and write plenty of services with it.
"Rust makes hard things easy and easy things hard."
Disagree here. As the article shows, Rust 2018 is very nice and ergonomic. Iterators, lifetime elision, etc. are nice to work with.
Regarding noise: many people consider things like return types noise, but I find them very useful, because I can be sure of the return type. Regarding actual code, Actix doesn't look that bad either, judging from its examples.
I use warp and it's pleasant to work with. The only thing I hate is compilation time; other than that, I don't think I have any major criticism of Rust. But compilation time is close to C++'s, so...
I'm a recent Rust fan (via AoC), but for "easy things hard", I would offer input/string processing as an example— regexes, string operations, grammar, etc. I know that Rust is forcing me to be correct and handle (or explicitly acknowledge that I'm not handling) my error cases, but from the point of view of just wanting to get the happy path working, it's a lot more noise to deal with compared with what it would look like in Python, eg:
  let re = Regex::new("(?P<min>[0-9]+)-(?P<max>[0-9]+) (?P<letter>[a-z]): (?P<password>[a-z]*)").unwrap();
  re.captures_iter(&contents)
      .map(|caps| Input {
          password: caps.name("password").unwrap().as_str().to_string(),
          letter: caps.name("letter").unwrap().as_str().chars().next().unwrap(),
          min: caps.name("min").unwrap().as_str().parse().unwrap(),
          max: caps.name("max").unwrap().as_str().parse().unwrap(),
      })
      .collect()
If you get rid of unwrap and use `if let` and `match`, this code will clean up nicely.
If the code is supposed to be maintainable, you could also make something like a FromRegex trait for Input. It would be a good idea to try Exercism's mentoring to get better at writing it the easier way the first time. The mentoring really is what helped me think in ways that make this kind of mess easier to avoid.
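To show how the unwraps can disappear without the regex crate: a hand-rolled stand-in parser (the `parse_line` function and the input shape "1-3 a: abcde" are assumptions for illustration) where `?` on `Option` does the work, so a malformed line yields `None` instead of a panic.

```rust
struct Input {
    min: u32,
    max: u32,
    letter: char,
    password: String,
}

// Every `?` here replaces an unwrap: any missing or malformed piece
// short-circuits the whole function to None.
fn parse_line(line: &str) -> Option<Input> {
    // Expected shape: "1-3 a: abcde"
    let (range, rest) = line.split_once(' ')?;
    let (min, max) = range.split_once('-')?;
    let (letter, password) = rest.split_once(": ")?;
    Some(Input {
        min: min.parse().ok()?,
        max: max.parse().ok()?,
        letter: letter.chars().next()?,
        password: password.to_string(),
    })
}
```

The same `?`-based style applies directly to the regex version, since `Captures::name` also returns an `Option`.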
Fair fair. To be honest, I think in a lot of these cases there are good helper crates/macros; for this I would now probably use recap: https://docs.rs/recap/0.1.1/recap/
The really horrendous scenario was when I was trying to navigate pest iterators for parsing according to a grammar.
Interesting! Can you help me understand why this doesn't work for me?
error[E0599]: no method named `parse` found for enum `Option<regex::Match<'_>>` in the current scope
--> src\main.rs:24:39
|
24 | min: caps.name("min").parse().unwrap(),
| ^^^^^ method not found in `Option<regex::Match<'_>>`
I think you need .name("min")?.as_str() to access the underlying text of the match object (after making sure it is a valid match with the ?), which can then be parsed. The regex Match object itself does not have a parse method that I can see.
I don't know what the structure Input looks like, but I played around with your code and it seems to work with as_str().
> I cannot believe that I am smitten by Rust within an hour
This has been my, and a lot of my colleagues', experience with Rust. Never have I seen people fall in love with a programming language so strongly. And the love lasts for a long time.
One way to read your comment, which I think is the way others are reading it, is "you only like Rust because you haven't used it much; try it more and you won't like it anymore." The implication being that only people who haven't used Rust can like it.
That may not be what you meant but it seems like that’s what people are understanding.
An honor to have Steve Klabnik himself respond to me. Thank you for your amazing work for the Rust community, and for the books as well. I'm a Rust hobbyist and I'm just trying to be objective, especially since the majority of my day job doesn't involve Rust, and my teammates refuse to let me introduce small bits of Rust code here and there because it's unknown territory.
They didn't, though? They indicated that they had a similar experience of falling in love with it quickly. That says nothing about when that experience happened.
I know it’s pedantic but I despise the semicolons in Rust. They make the language ugly and constantly burden the programmer unnecessarily. It’s a bizarre choice for such a modern language.
I find my own JS code easier to read since I dropped them, and my tired eyeballs no longer need to lex the ";" pattern. Yet some JS masters (Mike Bostock) include them religiously.
Semicolons do make possible multiple statements on a single line -- but surely there's a stronger rationale than that.
Python allows semicolons for separating multiple statements on a single line, even though it doesn't require them otherwise. (Those statements can't have blocks, because of the indentation-based syntax, but that's a separate issue.)
Some syntax is close to Python, so some mental load is minimized, e.g. the use of snake_case, self, None, type-annotation syntax, and dict unpacking.
Some lines read like sentences, e.g. for loops, single-line if statements, impl Foo for Bar, etc. Some design choices also read better, I think; for example, let reads better than var.
Of course such preferences are personal. I've read other comments that mention that the tutorial covers only basic Rust features and it can get much more complicated when using more advanced features.
"As a Python programmer with limited experience with compiled languages"
In other words, someone who doesn't have a CS background, never took theory of programming languages, etc. "Python programmer" is like the 2020's equivalent of a "HTML developer" from the 2000s.
I used to use Ruby for small (100-500 loc) scripts to parse various CSV/JSON documents. You craft a couple perfect stanzas of immaculate code to do what you need and you feel great. Then you run it and turns out you forgot an `end` somewhere. Fuck. You run it again, and the name of the hash map variable is unknown, because you changed it at the last moment to make it more descriptive. Fuck. There, fixed everything. Run it again. Running for a couple seconds now with no errors -- pop the champagne! Wait, what's that? Row #998 of 1000 had an unexpected type of value? Fuck. Fix it, ship it. Wheel, snipe, and celly.
At some point I just started writing these things in Rust instead. Describe what I want deserialized as a struct. List the fields I want as enum variants. What's that, rustc, the "name" field is optional? Ok, that makes sense, I'll handle this right away. Done. Hey, rustc, how come people talk about fighting you all the time when you're actually the world's greatest pair programmer?
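The "optional field" conversation with rustc would normally go through serde; as a dependency-free sketch of the same idea (the `Row` struct and `parse_row` are invented stand-ins for a serde derive plus the csv crate), the type system forces the missing-value case into the open:

```rust
#[derive(Debug)]
struct Row {
    id: u32,
    // Option makes "this field may be absent" part of the type,
    // so every consumer has to decide what missing means.
    name: Option<String>,
}

// Minimal hand-rolled CSV-ish parser, standing in for serde + csv.
fn parse_row(line: &str) -> Result<Row, String> {
    let mut fields = line.split(',');
    let id = fields
        .next()
        .ok_or("missing id")?
        .trim()
        .parse()
        .map_err(|e| format!("bad id: {e}"))?;
    let name = fields
        .next()
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .map(String::from);
    Ok(Row { id, name })
}
```

Row #998 with an unexpected value becomes an `Err` you handled up front, not a runtime surprise.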
As another Python programmer I also agree. Rust's features are very mature especially compared to an old dinosaur like C. I only wish its syntax was more like Swift's which is more pleasing to the eyes and was also made by the same guy.
Minor note: Graydon created Rust and later worked on Swift. He didn't create Swift, however.
That said... both languages read like close siblings to me, to the point that I find it annoying switching between the two, because I keep subconsciously trying to use each one's semantics in the other.
> There is a special lifetime, named 'static, which is valid for the entire program's lifetime.
I've heard this many times before, but as the guide[1] says:
"You might encounter it in two situations... Both are related but subtly different and this is a common source for confusion when learning Rust."
'static means different things for references and trait objects.
i.e.
'a: 'static --> reference with lifetime 'a lives forever and can never be dropped.
T: 'static --> Type T has no references in it; it is effectively 'entirely owned' by the owner of an instance of T.
These are totally different things with the same name.
(I believe technically there are actually three; `<T: 'static>` is a type constraint and `T + 'static` is a trait bound, but both mean the same thing as far as I know.)
The missing information here is that lifetimes don't apply at all to types that don't contain any borrowed references (such as `i32`, or self-contained `String`).
So `T: 'static` doesn't require values to live for the entire duration of the program. It requires any borrows, if present, to be valid for that long. If there are no borrows involved, then 'static is trivially satisfied.
In practice `T: 'static` should be understood as "all temporary references are forbidden here".
`T: 'a` means that you may keep values of type T around for lifetime 'a, but not that they will. So `T: 'static` means that you may keep values of that type around forever, as they do not refer to anything that doesn't live forever.
So neither `&'a i32` nor `MutexGuard<'a, i32>` are 'static (unless 'a is), because you shouldn't keep references like that around longer than the stuff it points to. But i32 by itself satisfies i32: 'static, because it's perfectly fine to keep an i32 around forever. (But that doesn't mean that every i32 will stick around forever.)
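That "may be kept around forever" reading is exactly why `std::thread::spawn` demands `T: 'static` (the `send_to_thread` wrapper below is a made-up helper): the thread may outlive the caller, so owned values qualify and borrows of locals don't.

```rust
use std::thread;

// The 'static bound says the value can be kept alive arbitrarily long,
// which a spawned thread might do.
fn send_to_thread<T: Send + 'static>(value: T) -> thread::JoinHandle<T> {
    thread::spawn(move || value)
}

fn main() {
    // String owns its buffer, so String: 'static holds, even though
    // this particular string certainly won't live for the whole program.
    let s = send_to_thread(String::from("owned")).join().unwrap();
    assert_eq!(s, "owned");

    let n = 5;
    // i32: 'static holds too: it contains no borrows at all.
    assert_eq!(send_to_thread(n).join().unwrap(), 5);
    // send_to_thread(&n); // error[E0597]: `n` does not live long enough
}
```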
> These are totally different things with the same name.
Another way to think about it is that “T: ‘static” restricts all references in T to be static, in the same way that “‘a: ‘static” restricts ‘a to ‘static. The case when T doesn’t contain any references is then just a boring base case.
The wording that "'static means different things" is a bit misleading. It means the same thing, but the meaning differs because of the context:
Lifetime: lifetime means that the first lifetime outlives (is as long or longer) the second
Type: lifetime means that the type is constrained by the lifetime. Because 'static means "the lifetime of the whole process". That means that the type is not constrained.
Types are not generated at runtime, lifetimes are types (although they cannot be constructed).
T<'a> and T<'b> are different types altogether (although, being generic, they can resolve to the same type if 'a == 'b, but it's not required). All this happens at compile time.
When you say T: 'a you say that T is a subtype of 'a hence it lives longer.
Type T, being generic, can be an OwnedStruct, but also an &'a BorrowedStruct or a StructWithReferences<'a>. The owned one is a subtype of 'static since it's owned, but the ones with lifetimes are subtypes of 'a which itself (being a generic lifetime) can be a subtype of 'static or any other lifetime. Again, all this happens and is checked at compile time. This ensures that at runtime, the actual references are alive as per their lifetimes without explicitly checking them anymore.
You misunderstand; what confuses me is how 'static can mean the same thing in both of these contexts.
T: 'static does not mean the references in T must exist for the lifetime of the application.
I.e. the lifetime constraint on T isn't the same 'static as in &'static, where the reference must live for the entire lifetime of the application.
Types are static. T: 'static applies to instances at runtime.
These instances do not have and are not related to the 'static lifetime which is “the entire length of the application”.
I accept:
> Type: lifetime means that the type is constrained by the lifetime.
I don't accept the explanation:
> Because 'static means "the lifetime of the whole process". That means that the type is not constrained.
A does not follow from B.
If you want something to mean "A type cannot have references in it" then invent a 'noref lifetime.
'static in this context would mean that all members of T must be 'static, which means that all instances of T must be 'static.
It does not mean that.
I don’t understand why they have the same name; they are not the same thing; it is an example of failure to have orthogonality in the language design in my opinion.
You seem to misunderstand what type : lifetime means.
> 'static in this context would mean that all members of T must be 'static, which means that all instances of T must be 'static.
Your "which means that" isn't true. It doesn't mean that. The syntax type : lifetime indicates that the values of the type MUST BE ABLE to outlive the lifetime. It doesn't mean that they NEED to outlive the lifetime.
This means, for example, that i32 : 'static holds even when the value doesn't live for the whole 'static lifetime. It COULD live that long, though, if the author of the code allocated it statically.
> 'static in this context would mean that all instances of T must be 'static.
You mean in T: 'static?
No. It means that any instance passed as type T must be bound by 'static and therefore could be held up to the end of 'static. This does not mean that they're allocated at compile time, it just so happens that static variables (allocated at compile time) are 'static but the causality is reversed.
> If you want something to mean "A type cannot have references in it" then invent a 'noref lifetime.
It can have references in it! As long as they're bound by 'static.
> 'static is not the "lifetime of the entire application" when it is used in the context of T: 'static.
Yes it is.
T: 'static means T can live up to the end of the "lifetime of the entire application".
> ...but it can also have values in it which are not 'static.
I don't follow. Values are indeed bound by 'static. If they weren't we wouldn't be able to pass values to other threads (which can potentially last as long as our application's main thread).
> 'static is not the "lifetime of the entire application" when it is used in the context of T: 'static.
It is. Any owned instance without non-'static references inside can live up to the "lifetime of the entire application". You might drop them earlier if you wanted to, but you don't have to since it can live up to the "lifetime of the entire application" and therefore can be passed for example into a thread that can hold the value up to the end of the "lifetime of the entire application".
Any owned value can be held indefinitely as long as the program is running.
I'm taking a guess here: you mean that values' lifetimes can be constrained (I guess you mean by dropping the actual value). But it's the owner that ended it earlier, not the caller (where 'static applies). You will never be able to have an owned value with a lifetime shorter than 'static without dropping it and, if you drop it, you cannot pass it anywhere. Hence any owned type that is passed is, by definition, bound by 'static.
> ...but my take on it is:
> - IF you take "x is 'static" as meaning the "X is valid for entire lifetime of the application"
> then if:
> - x: &'static 'is static' and must be valid for the entire lifetime of the application.
> I would expect:
> - x: T + 'static 'is static' and must be valid for the entire lifetime of the application.
Your expectations are correct and that's what it is. Just replace "must be valid" with "can live up to".
That's why you need to pass T: 'static to threads, because a separate thread needs something to hold up to the end of the application since a thread can potentially never end.
Fair I guess; we're just arguing semantics. The point I was originally making is that:
'a: 'static is read "'a outlives 'static" [1]
T: 'static is read "T is bounded by 'static" (apparently, although I can't find a reference to it).
The syntax is the same, the meaning is different.
Maybe the 'static part of these two things is the same, but they seem to me to:
- mean different things
- use the same syntax
It is what it is I guess... just confusing as to why it was decided to use the same syntax for these things, instead of something different.
You could argue that adding new syntax makes the language more complicated; but I'm not sure. Does it make it less complicated when you overload the same syntax with multiple different contextual meanings?
Notice how bar2 has a &'a str where 'a: 'static and it can be happily passed to bar.
Of course in T: 'static you're subtyping a type and in 'a: 'static you're subtyping a lifetime (which implicitly subtypes the type that it's applied on)... but the ": 'static" part means exactly the same: "the left part of this bound can live up to the end of the application". Whether it's a lifetime, a borrowed type or an owned type does not matter.
> It is what it is I guess... just confusing as to why it was decided to use the same syntax for these things, instead of something different.
Because they are the same.
We could of course separate them into 'noref, 'yesrefbutstatic, 'staticref, etc. But then we'd have a needlessly restrictive std::thread::spawn that would accept only Owned, or only Owned<'static>, or only &'static Referenced. The implications are the same, hence they're the same. We wouldn't gain anything and we'd be needlessly restricted.
A simpler way to put it: owned values have an implicit 'static lifetime.
What you say makes sense... but I struggle to reconcile it with reality.
If "the left part of this bound can live up to the end of the application" then why is &a not 'a where 'a: 'static? a can live up to the end of the application. &a can live up to the end of the application.
Why is the result "argument requires that `a` is borrowed for `'static`"
    fn foo<'a: 'static>(a: &'a str) { println!("{}", a) }

    pub fn main() {
        let a = "hello world".to_string();
        foo(&a);
    }
So I guess I'm going to have to just agree to disagree on this one and bow out of this conversation thread I'm afraid.
If you do 'a: 'static, then you just said 'a is at least as long as 'static.
When you borrow the String, you just created a lifetime that is not as long as 'static. Variables are dropped (and hence unborrowed) in reverse order. So 'a will necessarily be dropped before its owner, hence it's shorter than 'static.
As you can see it complains that it's dropped at the end of main() even though it should be borrowed for 'static. If this was an owned value it would NOT be dropped at main() since it would be moved into the function on call.
Nothing surprising here.
> &a can live up to the end of the application
Nope. Imagine spawning a thread and passing &a but keeping 'a' in your main thread.
> a can live up to the end of the application. &a can live up to the end of the application.
Nope, a reference to a stack frame can't live up to the end of the application. The stack frame gets deallocated and the referent ceases to exist; therefore the reference you're creating in your linked example doesn't outlive 'static. `main` is not an exception to this in Rust.
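To make the contrast concrete, here's a minimal sketch (my own, not the thread's playground example): a string literal or a moved owned value satisfies 'static, while a borrow of a local cannot.

```rust
// A function that demands a truly 'static reference.
fn foo(a: &'static str) {
    println!("{}", a);
}

fn main() {
    foo("hello world"); // literals live in the binary: &'static str

    let owned = String::from("hello");
    // foo(&owned); // error: `owned` is dropped at the end of main,
    //              // so the borrow cannot be 'static

    // Moving the owned value instead (e.g. into a thread) is fine,
    // because String: 'static holds for fully owned data.
    let handle = std::thread::spawn(move || owned.len());
    assert_eq!(handle.join().unwrap(), 5);
    println!("ok");
}
```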
> T: ‘a is T outlives ‘a; but types are not generated at runtime.
Yes, that’s true, but you can make it a bit more specific by saying “all the references in T outlive ‘a”. Then it should make more sense, as references are generated at runtime.
This gets to the heart of what makes Rust an interesting language, as references are generated at runtime, but how long those references are valid for is checked at compile time using lifetime bounds.
A type constraint like T: ‘a or T: Trait is constraining values of type T, not the type T itself. That is T: ‘a is saying “all references in values of type T live at least as long as ‘a”. This includes ‘a = ‘static and the vacuous case of T containing no references.
That is the best article on Rust I've ever seen. Better than the Rust book. I've been saying that Rust needed a book that wasn't written by the designers of the language, who are too close to it. Now we have one.
But it only covers the very basics. Disclaimer: I've only scrolled through it, but the code snippets are all small. It does show an admirable amount of Rust's syntax, but it's not very discoverable, whereas the book is reasonably well structured (though not for novices who have forgotten a specific term). The link also doesn't deal with larger programs: there's not an Rc<T> on the page, let alone something like an arena or unsafe code, and those are really necessary for complex programs.
So, IMO: not better than the Rust book. It gives a first glance of Rust, something to accustom the eyes to a different syntax, but sidesteps the difficult bits, which can make transitioning so frustrating.
I have a lot of Rust articles[1] that go into a lot more depth. They're mostly adventures though, we learn about ICMP, ELF, file systems etc.
Some love that style (the detours, the stream of consciousness) and some can't stand it. The Rust book is excellent, and a completely different style — they're complementary!
I've favourited the article. It's a good complement to the Rust Book, which has more detail. This kind of summary doc is great for discovery, you skim it and you find something that you could have written differently, or you realise there's an idiom you weren't using.
It's your opinion and I'm not arguing: a lot of people love the Rust book, but some (including me) simply can't read it because of the style of writing, and because of cases where you discover some interesting and deep topic but the author gives just one example in 5-10 lines of code and that's all. I think that things like mutexes or boxes are deep enough to deserve 30-50 pages each, and I can say this about more than just mutexes and boxes. When explaining powerful things, one needs not only to explain how they work, but also what kinds of real tasks they can solve - the ways of using them are obvious once you know them and have experience using them, but absolutely opaque when you are reading about them for the first time.
I highly recommend Jon Gjengset's streams for more advanced Rust concepts like lifetimes or interior mutability. He goes into depth explaining how they work, what kinds of problems they solve, and gives tons of great examples.
There are a few other well regarded resources. Programming Rust from Blandy, Orendorff, and Tindall/O'Reilly is great if you have lots of knowledge in C/C++. Rust in Action from McNamara/Manning is great if you prefer to learn by building projects.
You may also be interested in the many existing teaching resources published by non-language-designers (whatever that means for a community-driven project like Rust), such as Rustlings and Rust by Example at https://www.rust-lang.org/learn, if the book doesn't satisfy whatever criteria you think disqualify it. Some of it is published via the rust-lang.org website, but it was/is maintained by different people who work on the language, compiler, std lib (etc). The project has a large number of people across many teams: https://www.rust-lang.org/governance
Rust Crash Course by Michael Snoyman is great. You can get a free copy at fpcomplete.
Did a free course in December with him, and I'm still watching the classes because it has so much great info there.
Also I do recommend The Rust Programming Language; it really teaches you the "how/why" of using Rust.
To conclude, rust-learning (on GitHub) has great links and amazing articles about parts of the language; there is so much knowledge there, just go collect it :P
Out of curiosity, what don't you like about the Rust book? I personally loved it, reading about a year and a half ago. I thought they covered most of what I needed, was well written, and had plenty of good examples.
Not all writing clicks with every reader; that's impossible.
He previously said “That's too hard for an introductory book and not detailed enough for a reference manual.” It is very difficult to get this right, given our goals, and so reasonable people may think we fall short there.
Sarcasm aside, the Rust compiler has gotten a lot faster (and parallelizes better) over the last year thanks to Nicholas Nethercote and others.
Another thing few people realize is that the "incremental" compilation mode that's the default for debug builds can also be enabled for release builds!
In CI, something like sccache can help a lot (using the GCS or S3 backend). It makes GitHub Actions' two-core limit almost bearable. Almost.
If anything, code snippets take time to parse and understand what every word and symbol mean. I kept going over "traits" because I didn't get them at first.
Yep, code is a lot of "words" which throws off the estimate. I need to address this, but I don't think completely ignoring code blocks is really the solution there.
Maybe you could estimate the reading time of a code snippet by taking the square root of the number of lines of code? I imagine that the larger the code snippet, the more irrelevant boilerplate it contains, so the square root might be a good way to model this
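As a toy sketch of that idea (the words-per-minute rate and per-line weight here are assumed constants, not from any real reading-time tool):

```rust
// Estimate reading time: prose at a fixed words-per-minute rate, code
// snippets weighted by the square root of their line count, so larger
// snippets contribute proportionally less per line.
fn reading_minutes(prose_words: usize, code_lines: &[usize]) -> f64 {
    let wpm = 200.0; // assumed average reading speed
    let prose = prose_words as f64 / wpm;
    let code: f64 = code_lines
        .iter()
        .map(|&lines| (lines as f64).sqrt() * 0.25) // assumed 15s per sqrt-line
        .sum();
    prose + code
}

fn main() {
    // 1000 prose words plus three snippets of 4, 16 and 36 lines:
    // 5.0 prose minutes + (2 + 4 + 6) * 0.25 = 3.0 code minutes.
    let est = reading_minutes(1000, &[4, 16, 36]);
    println!("{:.1} min", est);
}
```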
For anyone curious what a Go version of the same article would look like (as I was), I tried to answer that question by writing https://dmitri.shuralyov.com/blog/27.
I was initially sceptical of this, but after skimming I think it's actually a very good introduction. I do like the brevity of it compared to the walls of text in the official Rust one. Although it does give the illusion that Rust is very simple/easy to learn, any beginner will find that anything more than "Hello World" or "2 + 4" is 10x more difficult than writing it in an interpreted language. Not because Rust syntax is difficult, but because the constraints it introduces require more thought about the design of the program.
The article also skims over the single-ownership model, which is a big difference with Rust. And it doesn't cover creating macros, cells, threading & mutexes, heap-allocated types, unsafe code, Rc, std::mem, or the whole crates and cargo ecosystem. Which is why I think this should have been titled "A half-hour introduction to Rust syntax". Hopefully this does bring more people onto the Rust train. I think 2021 will be a good year for Rust :)
Since a lot of folks take issue with the article's title, think of it as "If you had only a half hour to learn Rust, here's what I'd tell you".
If you have more, do engage in a joust^W friendly collaboration with the compiler, which will get you quite far since diagnostics are a first-class feature :)
I was a complete Nube an hour ago... I understand info from Nubes has special value, so I'll be as explicit as I can.
I must be slow, being an old fart... I've been at it for 53 minutes and got about 1/2 way before all the questions in my brain stacked up to "full".
I'd add a recommendation at the top of this to have a Rust compiler handy.
{} were called braces when I learned programming, [] were brackets... this tripped me up, Unicode calls these {} curly brackets ?!? (Why did it trip me up? Because on my screen in non-dark mode, { and [ look identical due to my eyesight, and I assumed I was looking at [ because the text said that's what it was... as you get older, you'll understand)
I don't understand why
    b = a;
    c = a; // <-- doesn't work because a is "used up"???
I'll get a Rust compiler and start again.
I'm allergic to "macros" as they have had a special place in hell because of their misuse in C... I hope Rust is more sane.
It's definitely a lot to take in, you have the right idea — the compiler is here to help, the diagnostics are wonderful and improving every week thanks to the work of Esteban Kuber and others.
Re macros: try to keep an open mind if you can, C macros and Rust macros are completely different. You can get very far without reaching for them, so don't worry too much!
It's reasonably full-featured for a web IDE (much more so than the Go playground), and it includes many commonly used packages.
> Because on my screen in non-dark mode...
The little sun icon in the lower left corner of the page turns on dark mode! Hopefully that helps.
> I don't understand why b=a; c=a; <-- doesn't work because a is "used up"???
This is something called "linear typing", and it's admittedly pretty unusual in a mainstream language.
The core idea is that the assignment operator _only ever_ creates a "shallow" (bitwise) copy of data; it never invokes anything like C++'s copy assignment operator. For types that are "plain old data" (like primitives), the old and the new values are fully independent, so the assignment works the same way it would in most languages, i.e., `a` is not "used up". This is what other commenters mean when they say that primitives "implement `Copy`". But if the old value has pointers or references, then the two values are not independent: after the bitwise copy, they both have pointers to the same data. Since data can only ever be shared explicitly in Rust, and the assignment operator never performs a deep copy, the old value, `a`, is considered invalid and cannot be re-used.
If you're familiar with C++11 or later, one way to think of it is that `=` in Rust always behaves somewhat like `std::move` in C++:
`b = std::move(a);`
The details are substantially different (this will call `a`'s move-assignment operator if one exists, which has no equivalent in Rust, and C++ offers no support for ensuring that `a` is no longer used if its move-assignment operator invalidates it). But the general idea that "move semantics are on by default" is essentially accurate.
Regarding a getting used up, the Rust compiler enforces a single-ownership principle where all values must have a single owner. If you move ownership of a to b, you cannot use a anymore as it no longer has ownership of the value.
The only exception to the above is if a type "is Copy". This means that values of this type can be copied very cheaply, and in this case the compiler will allow you to use it multiple times, which is implemented by it being duplicated on each use.
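A minimal sketch of the two cases (Copy vs. move), assuming `i32` and `String` as representative types:

```rust
fn main() {
    // i32 implements Copy: each assignment makes an independent copy,
    // so `a` remains usable after both assignments.
    let a = 42;
    let b = a;
    let c = a; // fine: Copy types are duplicated on assignment
    println!("{} {} {}", a, b, c);

    // String owns heap data and is NOT Copy: assignment moves ownership.
    let s = String::from("hello");
    let t = s; // ownership moves from `s` to `t`
    // let u = s; // error[E0382]: use of moved value: `s`
    let u = t.clone(); // an explicit deep copy is fine
    println!("{} {}", t, u);
}
```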
Rust is tricky. Took me about a year to grok decently.
Regarding references, the compiler does flow analysis and tries to enforce a sort of static rwlock semantic at the variable level. You either have a single mutable reference xor N readable references to the same variable.
If b is a mutable reference to a, c cannot point to a until b relinquishes the "lock", which happens when b goes out of scope.
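A sketch of that "static rwlock" behavior (one `&mut` XOR many `&`), in my own minimal example:

```rust
fn main() {
    let mut value = 10;

    // Many shared (read-only) references may coexist.
    let r1 = &value;
    let r2 = &value;
    println!("{} {}", r1, r2);

    // A mutable reference is exclusive: while `m` is live, no other
    // reference to `value` may be used.
    let m = &mut value;
    *m += 1;
    // println!("{}", r1); // error: cannot borrow `value` as mutable
    //                     // because it is also borrowed as immutable

    println!("{}", value); // `m`'s borrow ended above, so this is fine
}
```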
You keep using this word, variables. I don't think it means what you think it means.
At least in my book pointers are still variables (as in "a pointer variable"), and a variable is any named value, whether it's a scalar or a pointer or a nth-pointer, or what its storage is.
But you mean that in your example there is no way to affect the previous value, right?
> I just googled it... why the heck would you copy pointers? That's insane!
Well, copying values and passing them around would be too costly on memory (for larger structs especially), and would prohibit several techniques.
Variables can be varied... you can increment, decrement, do whatever you want with them. They keep track of things.
Pointers are used to reference things allocated from the heap, and never anything else, unless you're insane. Pointers get directly handled in linked lists, trees, etc.
If at all possible, pointers should be avoided otherwise.
Values passed to a procedure can be done by value (the default in Pascal), or by reference (VAR parameters).
I don't see why anyone wouldn't copy values by default... it is the only sane way to do things.
> Pointers are used reference things allocated from the heap, and never anything else, unless you're insane.
People routinely use pointers to things on the stack in several languages. Rust even makes it safe to do that, using lifetimes - a pointer to something on the stack can't outlive the stack frame it points into.
> I don't see why anyone wouldn't copy values by default...
Because not everything is safe to copy. In particular, Rust has a few kinds of pointers which come with special rules.
Firstly, it has boxes, which always point to something on the heap, and have a rule that that when the pointer dies, the thing it points to gets freed. If you copied a box, then when one of the copies died, the thing would be freed, and then the other copy would have a pointer to invalid memory, which would be bad.
Secondly, it has mutable references, which come with a guarantee that a mutable reference is the only pointer to a given thing. If you copied a mutable reference, you would break that guarantee.
Thirdly, it has reference-counting pointers (these are in the standard library, not the language). You can make duplicates of those, but they have to increment their reference count when you do so. Copying is always just a bitwise copy, so there is no chance to increment the reference count. Instead, duplication is an explicit operation.
There are a few other things it doesn't make sense to copy. Like, what would it mean to copy a mutex?
So, in Rust, you can't copy by default. However, it is really easy to mark a type as being copyable (the compiler will check that it really is, ie doesn't contain any non-copyable things), and then you can copy it.
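A sketch of opting in to copyability (the `Point` type here is hypothetical):

```rust
// Marking a type as Copy: the compiler allows this only if every
// field is itself Copy.
#[derive(Copy, Clone, Debug)]
struct Point {
    x: i32,
    y: i32,
}

// This would be rejected, because String is not Copy:
// #[derive(Copy, Clone)]
// struct Named { name: String }

fn main() {
    let p = Point { x: 1, y: 2 };
    let q = p; // bitwise copy; `p` stays usable
    println!("{:?} {:?}", p, q);
}
```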
> Variables can be varied... you can increment, decrement, do whatever you want with them.
Variables by themselves are not much. They inherit the properties of the type they are bound to. So what you can do with them can be as restrictive or as permissive as the type allows. That type can be a pointer/reference as well.
"I don't understand why b=a; c=a; <-- doesn't work because a is "used up"???"
Had the same issue a few months ago.
The pointer can only be referenced by one variable at a time.
If you "move" the pointer from a to b then a is figuratively "used up" because a is now blocked from doing anything with the pointer anymore.
I guess, for Rust itself the pointer is still referenced by a, Rust just blocks you from using it. But thinking you moved it to another variable helped me a bit.
That will work fine, because 2 is 'Copy', so it can be copied trivially (as are structs consisting only of Copy members which are marked as Copy). Roughly speaking anything with a pointer in it isn't Copy, though many objects will implement Clone, which basically just means you need to explicitly call .clone() to make a copy instead of it happening implicitly.
You need to seriously read more on Rust before making judgments. You're clearly confused.
What '2 is Copy' means precisely that Rust WILL NOT 'introduce pointers where they don't need to be' - 2 is fine to 'alias' because it's a primitive type and so a copy is made when you do that, (i.e. the type implements the 'Copy' trait). However when you do that with complex types that do not implement Copy, you're moving ownership to the new variable so cannot use the old one any more.
This is precisely what I was looking for when I searched for “Rust for C++ programmers” in the past. I love the Rust book, but I’ve always thought the common focus on accessibility for newbie programmers made it feel tedious for those who are more seasoned. I’d love to see more writing like this, maybe even for other things like libraries!
I've found it's generally really tough to find guides for getting into new languages that are written for experienced programmers who don't know that language yet. Everything seems to be written for complete newbies, or be a dense reference that assumes you know almost everything already.
It's a bit annoying, but not much to do except slog through the guides and try to write a few programs.
I found the Rust book to be anything but tedious, as a seasoned programmer. As I read it, it made me want to cry thinking back on past bugs and issues that I had dealt with that the language had been designed to prevent.
It made me think of the days and weeks of debugging that I wouldn’t have needed to do, had Rust existed.
New to Rust, was excited to see Prolog anonymous variables make an appearance. But one thing struck me is the mismatch between variable and function declarations...
let x: i32 = 42;
vs
fn fair_dice_roll() -> i32 { 4 }
It seems the designers missed an opportunity, as the latter could quite easily have been `fn fair_dice_roll(): i32 { 4 }`.
With type ascription being optional in some places, I have to wonder if using the colon for both type ascription and fn return type would introduce any parsing/grammar ambiguities.
I personally like the arrow being used for return types as it highlights pretty well what returns stuff. I don't need to track fn definitions or anything when scanning a file - I just need to look for the arrows.
The way I see it, that would mean the function 'fair_dice_roll' has type 'i32'. While in this case that may be fine, if the function had arguments, it would be ambiguous.
It only took me a couple of hours to port one of my old C programs, a utility to produce a hexadecimal and ASCII dump, to Rust. The borrow checker didn't even faze me. All I needed was plenty of Google-fu and my coding skills. JetBrains CLion and git are also very useful tools.
I had a similar experience porting a relatively straightforward and simple CLI utility I had written in C.
Having said that, when I've faced real challenges with Rust (and the borrow checker) it's been with bigger longer running applications, like a webapp, or a long running service.
I have no doubt part of that comes from having a stronger background in garbage-collected languages, so my mindset when developing larger applications is in that mode. I'm sure with enough practice I'd get it, but there were many things that I just couldn't replicate one-to-one in Rust. Not that they weren't possible, but they were just different enough that I couldn't figure them out without being more comfortable with the language, and I just haven't dedicated the time needed to it.
Will likely go back to it at some point, as it is an interesting approach to programming, with lots of upsides.
My biggest issue was trying to print the contents of u8 as a character. In the end this was actually as simple as writing print!("{}", line[i] as char), no print specifiers needed.
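For reference, a minimal sketch of that technique (my own reconstruction, not the parent's actual utility):

```rust
fn main() {
    let line: &[u8] = b"Hi!";

    // Hex column: format each byte as two hex digits.
    for &byte in line {
        print!("{:02x} ", byte);
    }
    println!();

    // ASCII column: `as char` maps a u8 to the char with the same
    // code point (fine for ASCII; other bytes map to Latin-1).
    for &byte in line {
        print!("{}", byte as char);
    }
    println!();
}
```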
I may do another more complicated port soon. I love CLion.
> It only took me a couple of hours to port one of my old C programs, a utility to produce a hexadecimal and ASCII dump, to Rust.
Writing C code with Rust is a very different experience than converting C code to idiomatic Rust. There is more to porting than doing 1:1 conversions between keywords.
And by your description your utility basically reads words and prints their value. That doesn't really venture too much beyond the hello world territory.
When you write a program, heap variables are allocated using NEW or Malloc... stack variables are local to a procedure or function, the rest are in the Var block before your code, and are neither on the heap, nor the stack. They're global to the program, or the unit.
In some (mostly interpreted) languages they are. Some also don't have a stack at all.
But in the most common languages, there are special storage areas for globals, statics, and constants, which is what the grandparent means (e.g. the DATA section).
I have been doing the rustlings exercises (which are basically exercises to fix errors on various topics) - they are just the right size to keep you engaged and continuing, because they reward you with success. Also it's very easy to leave off anywhere and resume at exactly the same point, or go back to an old exercise and retry it. Plus, because it's just code organized in a directory, you can leverage all the features of your favorite IDE. Highly recommended.
Why is `move` needed there, if both `answer` and the arg of the `Fn` seem to be references (`&str`)? To a layman, this sounds as if both "challenge" and "answer" should be borrowed - so why "move"? What's even there to move?
This is confusing, and Rust's error messages make it much worse. If you try to remove the `move`, you get an error:
> closure may outlive the current function, but it borrows `answer`, which is owned by the current function. To force the closure to take ownership of `answer` use the `move` keyword.
But 'answer' is of course not owned by the current function, and how can you take ownership through a shared reference?
The explanation is double indirection. By default, the closure captures a pointer to answer, which is itself a pointer on the stack. Without the 'move', inside the closure `answer` has type &&str and points into make_tester's stack frame. With the 'move', it copies the passed-in pointer. The error message is referring to the pointer itself, and this is not obvious.
Incidentally I have never found docs for the '+' syntax there, would appreciate a pointer to any.
> But 'answer' is of course not owned by the current function
As you imply later, ‘answer’ itself is owned by the current function, but the ‘answer’ value is a pointer that doesn’t own what it points to. This subtlety, that references are full values themselves, is definitely something to trip on, especially when double indirection starts popping up like this.
I believe what happens here is that the closure tries to capture it as a &&str. This is because it defaults to trying to take the environment by reference. The "move" in this case means that it will try to take it by value instead. This feels tricky semantically because references are types too, so "taking it by value" means taking a &T rather than a &&T, just like you may think about how taking a T is by value as opposed to &T.
In this closure you are moving `answer`, that is, you are moving the reference itself into scope. The `challenge` reference is already owned by the closure since it's a parameter. The move doesn't mean the closure owns the underlying strings, but the references themselves.
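Putting the sub-thread together, here is a compilable sketch of the example under discussion (reconstructed, so details may differ from the article's actual `make_tester`):

```rust
// The returned closure must own its captured &str, because it
// outlives this function's stack frame.
fn make_tester<'a>(answer: &'a str) -> impl Fn(&str) -> bool + 'a {
    // Without `move`, the closure would capture `answer` by reference
    // (a &&str pointing into make_tester's stack frame) and could not
    // be returned. With `move`, the &str itself is copied into the
    // closure; the String data behind it is never moved or copied.
    move |challenge| challenge == answer
}

fn main() {
    let tester = make_tester("hunter2");
    assert!(tester("hunter2"));
    assert!(!tester("*******"));
    println!("ok");
}
```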
It's a great resource. I like this easy list approach, because many times a new language just needs a translation from concepts you already know.
I have a problem understanding how temporaries work in Rust. From what I can see, there is a difference between a temporary inside a vec that I bind to a variable before passing it somewhere, and a temporary inside a vec constructed directly inside a method invocation. I asked on SO [0], but did not receive a satisfactory answer, or I'm too dense to understand it.
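A hedged sketch of the kind of difference I think is at play (my own example, not the SO snippet): a temporary created inside a call expression lives until the end of the whole statement, so passing the vec straight to a function works, but binding the vec to a variable first does not extend the temporary's lifetime past that statement:

```rust
fn take(v: Vec<&str>) -> usize {
    v.len()
}

fn main() {
    // The temporary String lives until the end of this whole statement,
    // which includes the call, so this compiles:
    let n = take(vec![String::from("hi").as_str()]);
    assert_eq!(n, 1);

    // Binding the vec first fails: the String temporary is dropped at the
    // end of its statement while `v` would still borrow it.
    // let v = vec![String::from("hi").as_str()];
    // ^ error[E0716]: temporary value dropped while borrowed
}
```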
>In this article, instead of focusing on one or two concepts, I'll try to go through as many Rust snippets as I can
I really like this. One of the things I like to do when learning a new language is to go to http://www.rosettacode.org/wiki/Rosetta_Code and just go through snippet examples for certain common tasks.
Just quickly going through examples to me feels much easier than trying to learn from principles.
I should have said :) I am writing backtracking algorithms (think something like a Sudoku solver where we fill in values by guessing), and an 'Error' is when we can deduce the Sudoku cannot be filled in, so we have to backtrack.
I find Rust's ? notation gives a very natural way of writing such algorithms. I don't care "why" filling in the Sudoku failed, particularly because the solver will typically fail millions of times a second, so I definitely don't want constructing the error to be expensive in any way.
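Not the commenter's actual solver, but a minimal sketch of the pattern (names and constraint check invented for illustration): `?` on `Option` makes "deduce a contradiction, then backtrack" read linearly, and constructing `None` costs nothing:

```rust
// Place `val` at `idx`, or return None if it contradicts a (toy) constraint.
fn place(grid: &mut Vec<u8>, idx: usize, val: u8) -> Option<()> {
    if grid.contains(&val) {
        return None; // contradiction: bail out so the caller can backtrack
    }
    grid[idx] = val;
    Some(())
}

fn solve(grid: &mut Vec<u8>) -> Option<()> {
    place(grid, 0, 1)?; // on None, the whole attempt fails immediately
    place(grid, 1, 2)?;
    Some(())
}

fn main() {
    let mut grid = vec![0u8, 0];
    assert!(solve(&mut grid).is_some());
    assert_eq!(grid, vec![1, 2]);

    let mut stuck = vec![1u8, 0];
    // 1 is already present, so the first `place` yields None and `?` propagates it
    assert!(solve(&mut stuck).is_none());
}
```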
IIRC, it didn't work with Option in the past, so you might have been correct when your code was written. The ? operator came originally from the try!() macro, which was specific to Result; unlike the macro, it was designed to be extensible through the still-unstable std::ops::Try trait, but IIRC that trait was initially implemented only for Result, and only later was extended to Option and a couple of others.
It's sometimes called a type-level boolean. Instead of trying to remember what true or false means, you have explicit variants associated with success or failure. (The underlying representation, including bit width, is the same)
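A small illustration of the point (my own names, not from the article): a two-variant enum documents its meaning at the call site, while still having the same one-byte representation as `bool`:

```rust
use std::mem::size_of;

#[derive(Debug, PartialEq)]
enum Outcome {
    Success,
    Failure,
}

// The variant name says what the return means; a bare `bool` wouldn't.
fn check(n: i32) -> Outcome {
    if n >= 0 { Outcome::Success } else { Outcome::Failure }
}

fn main() {
    // Same underlying size as bool.
    assert_eq!(size_of::<Outcome>(), size_of::<bool>());
    assert_eq!(check(3), Outcome::Success);
    assert_eq!(check(-1), Outcome::Failure);
}
```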
I recommend typing out the entire article into an editor. After you work your way through it, try the Rustlings course, Rust by Example, the CLI tutorial, and then read The Book.
SRS is really good for learning small, compartmentalized knowledge items. Its usefulness degrades quickly when you move from knowledge to concepts. I'm currently learning Japanese and using several SRS tools to do so. The ones that work the best are for characters and vocabulary. I also use one for grammar, which is okay-ish but doesn't translate as well to the SRS format.
So in the context of programming languages, SRS might work if you need to memorize method names or method signatures (though I don't see why that is needed in the age of autocompletion and instant lookups from inside the text editor), but not so much for fundamental language concepts (like "how do lifetimes interact with the borrow checker").
I spent a few hours typing this entire article into an editor just to build muscle memory and I'm amazed. It's exactly the sort of article I needed to learn Rust.
I've since thanked Amos on Twitter and I've forwarded links to his blog to a lot of people. This is a great article and does a good job clarifying things to a newcomer.
In the 9th code snippet that's not an example of variable shadowing, it's just variable reassignment. Shadowing involves variable assignments in different scopes.
edit: I should've called it "variable redefinition" or something like that I guess, my mistake. Reassignment is definitely not the correct terminology for this.
Still, it's not shadowing because the new binding of 'x' is not effectively shadowing some other 'x' name, it's just taking its place in the same scope. And this is orthogonal to the memory allocation of the assigned objects.
There is effectively an implicit scope created by the let binding. In Rust you can't reassign a variable without it being mut. The two `x`s are separate variables named x and are not at any point mutated. Reassignment would look like this:
    let x = 1;
    x = x + 1; // <- error[E0384]: cannot assign twice to immutable variable `x`

    let mut x = 1;
    x = x + 1; // ok!
This obviously matters very little for an integer, but it is relevant to more complex types.
You can actually see the scopes, and the progress of variable liveness, if you run the compiler out to the MIR intermediate language:
    fn main() -> () {
        let mut _0: ();                // return place in scope 0 at src/main.rs:1:11: 1:11
        let _1: i32;                   // in scope 0 at src/main.rs:2:9: 2:10
        scope 1 {
            debug x => _1;             // in scope 1 at src/main.rs:2:9: 2:10
            let _2: i32;               // in scope 1 at src/main.rs:3:9: 3:10
            scope 2 {
                debug x => _2;         // in scope 2 at src/main.rs:3:9: 3:10
            }
        }

        bb0: {
            StorageLive(_1);           // scope 0 at src/main.rs:2:9: 2:10
            _1 = const 1_i32;          // scope 0 at src/main.rs:2:13: 2:14
            StorageLive(_2);           // scope 1 at src/main.rs:3:9: 3:10
            _2 = const 2_i32;          // scope 1 at src/main.rs:3:13: 3:18
            _0 = const ();             // scope 0 at src/main.rs:1:11: 4:2
            StorageDead(_2);           // scope 1 at src/main.rs:4:1: 4:2
            StorageDead(_1);           // scope 0 at src/main.rs:4:1: 4:2
            return;                    // scope 0 at src/main.rs:4:2: 4:2
        }
    }
No, it really is shadowing. The second `let` allocates a new memory location in the activation record; all future mentions of the variable name will refer to the new location. The old location still exists, and may have references to it already; references that will continue pointing at the old allocation and will not see anything that happens to the new one. If the old value has a `Drop` implementation, it will run after the second allocation’s `drop()`. The two `let` bindings can even have completely different types.
In Rust, shadowing is when you "declare a new variable with the same name as a previous variable" [1] -- even in the same scope. It seems it "fell out" of how the original compiler was implemented, and there was more support to retain it than not. See: https://internals.rust-lang.org/t/history-of-shadowing-in-ru...
There is literally a Rust example in your wiki link saying that this is shadowing. You can even redefine the variable to have a different type — how can this be reassignment?
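A quick sketch of that last point: shadowing lets the new binding have a different type, which plain reassignment never could:

```rust
fn main() {
    let x = "42";                    // x: &str
    let x: i32 = x.parse().unwrap(); // a new variable, also named x, new type
    assert_eq!(x + 1, 43);
}
```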
Very nice. As a very experienced programmer who has now learned more languages than I can remember, I generally look for these kinds of cheatsheets whenever I'm learning a new language, followed by the official documentation. Other than that, I don't personally find tutorials very useful.
The cheatsheets give me a quick intro to syntax conventions, and then the official documentation provides all the detail on specifics. I'm looking to mentally map how different PL constructs common across different languages are expressed in the new language I'm learning.
Along the way, I end up learning one or two new PL concepts. Fewer these days than in the beginning, hence it gets easier and easier to learn new languages.
I've never used Rust before so I'm exactly the target audience. I first got tripped up around
> Trait methods can also take self by reference or mutable reference:
    impl std::clone::Clone for Number {
        fn clone(&self) -> Self {
            Self { ..*self }
        }
    }
What's the asterisk doing in this code? I guess it's destructuring the struct somehow, but I don't see that syntax elsewhere: destructuring was introduced but looked like it worked without the star. Alternatively, it's dereferencing the reference, but that seems less likely.
I also find some of the syntax peculiarities and their combinations difficult to look up. In addition to & and *, there are question marks, dots, double dots, square/round/angle brackets, empty parentheses and apostrophes all over the place, and it makes it really difficult to deconstruct what's happening in a block of code.
Technically yes, but it'll change the semantics in an unhelpful way. If you leave off the & on self, then the method won't borrow self (take it by reference) but consume it (take it by value by 'moving' it), so you can't use it any more after calling clone(), which largely defeats the purpose of copying the struct.
Thanks pornel and rcxdude, this makes sense to me now. I was thinking it shouldn't be dereferencing because when references were previously introduced (the print_number example), we could call methods on the reference without any special syntax. But I guess it makes sense that instance methods work that way, while other functions need to know if we have a reference or the real object, and in this regard it works more or less like C++.
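To tie the thread together, here is a simplified version of the snippet under discussion (the article's `Number` has more fields; this sketch keeps one `Copy` field). The `*self` dereferences the `&self` reference so struct-update syntax can copy the fields into a new value:

```rust
#[derive(Debug, PartialEq)]
struct Number {
    value: i64,
}

impl Clone for Number {
    fn clone(&self) -> Self {
        // Without the `*`, we'd be trying to update from a &Number,
        // not a Number. The i64 field is Copy, so this copies it.
        Self { ..*self }
    }
}

fn main() {
    let a = Number { value: 7 };
    let b = a.clone();
    assert_eq!(a, b); // `a` is still usable: clone only borrowed it
}
```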
Literally the easiest way to understand Rust.
Just want to take a moment to appreciate your efforts in writing this amazing article.
Honestly, it made it so much easier to understand Rust, especially for a beginner like me.
But, the languages are so different, for instance in Swift
let x
x = 42
let y = 13
let y = y + 3
won't fly, for instance. I am trying to come up with the simplest explanation possible for this but I can't, which goes to show how much I understand either of the languages.
(Side note: dear Apple, please add syntax coloring and code formatting for Rust to Xcode. You have Fortran and C Shell in the list, but not Rust or Go, which are much more popular!)
I was inspired by this article and wrote "Twenty minutes to learn Go" [0]. Go being a much simpler language than Rust, I feel that anyone who knows a programming language can become productive in Go in less than an hour. Of course this simplicity comes at the cost of increased boilerplate, less expressiveness...
What I would also like to see in these types of well-written articles is an answer to questions like "what can you do with Rust that you cannot do with C?"
Most answers will focus on safety, but setting that aside, there's one thing Rust has which AFAIK C doesn't have: fat pointers. They're the implementation detail behind both array slices (keeping together the pointer to the first element and the length) and trait references (keeping together the pointer to the object and the pointer to the vtable).
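The size difference is directly observable: a fat pointer is two machine words, one for the data pointer and one for the length or vtable:

```rust
use std::mem::size_of;

fn main() {
    // Thin pointer: one machine word.
    assert_eq!(size_of::<&u8>(), size_of::<usize>());
    // Slice reference: data pointer + length.
    assert_eq!(size_of::<&[u8]>(), 2 * size_of::<usize>());
    // Trait object reference: data pointer + vtable pointer.
    assert_eq!(size_of::<&dyn std::fmt::Debug>(), 2 * size_of::<usize>());
}
```

In C you'd carry the length or function table around as a separate argument by convention; Rust bakes the pairing into the reference type itself.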
Preface: This is an honest question, not an attempt to start a "which language is better" war.
I'm proficient in C#. I haven't yet encountered any problems I can't solve with C# and the .Net open source ecosystem. (Perhaps that says more about the banality of the problems I'm solving than about .Net, but there you are.)
What would Rust give me the tools to do that I can't already accomplish with C# / .Net Core?
Rust is a systems (i.e. lower level) language similar to C and C++ that can be compiled to native standalone executables.
If you're primarily doing app dev work, be it desktop or web, you wouldn't really benefit much from it. Although I think Rust code can be compiled to wasm so you could use it as part of webdev work.
I've mostly used web-specialized programming languages like JavaScript and PHP, and some Python. Some of the unique characteristics of Rust have been trickier for me to pick up.
This looks like a fun and useful article, but I'd also recommend Rustlings to anyone interested in learning Rust.
Colour me impressed. A spot check on the Python thing I built yesterday has an equivalent:
>A rust device driver for the Bosch BME280 temperature, humidity, and atmospheric pressure sensor and the Bosch BMP280 temperature, and atmospheric pressure sensor
Rust is a language that just doesn't go into my head.
And whenever I have a question the community either downvotes or ignores me.
The readability is bad.
The error messages don't make sense.
This type of post only covers the syntax basics.
There's nowhere to turn to for applied concepts.
It's like Ultima Online back in the day. Everyone loved it, but every time I tried I fell asleep and just couldn't get into it.
I'm semi-happy with Go. It's a hands-on language that is fast enough. But it's missing elegance, and something I can't put my finger on is missing.
But Rust, it feels like a step back.
What do I think in? Boxes? Wrap unwrap move what? It's so hipster I just hate it. And compilation takes a lot of resources and time. Hell, Java makes more sense to me than Rust.
And why all those macros?
And impl for order is backwards.
Why couldn't the word class be used, no they just want to be different for the sake of being different. It's a language for hipster rebels without a cause.
Rust does not have sealed traits, though you can sorta contort the privacy system to do similar things if you want. We also do not have structural typing. (Generally; tuples are structurally typed but nothing else is)
A lot of Rust's language people did a lot in Scala. The languages are pretty different IMHO.
As was explained to me, Rust has heritage from OCaml. If one squints, they can even see it still beneath the surface. I hear the original compiler was even written in OCaml.