I think it is just as (un)true that Rust programmers don't need to worry about those errors, as it is that Java/Python/JS/whatever programmers don't need to worry about them.
And this is really the only meaning for "memory safe" that can apply to anything that runs on real world hardware and operating systems. It's always conditional on the correctness of the compiler checks, the runtime, and in Rust the unsafe blocks.
An important factor here is that, because this scheme makes the trusted/human-verified part of the codebase a bit more extensible than usual, a lot of things that would typically live in the compiler or runtime get moved into libraries, just as a matter of flexibility and architecture.
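To make that concrete, here's a minimal sketch of the pattern (a toy example, not actual standard-library code): the library exposes a safe function, and the `unsafe` block inside it is the part a human has to verify instead of the compiler.

```rust
// Toy example of "trusted code living in a library": callers only ever
// see the safe API, and the unsafe reasoning is confined to one
// commented block.
fn get(slice: &[u8], i: usize) -> Option<u8> {
    if i < slice.len() {
        // SAFETY: we just checked `i < slice.len()`, so the unchecked
        // read cannot go out of bounds. The compiler takes our word for it.
        Some(unsafe { *slice.get_unchecked(i) })
    } else {
        None
    }
}

fn main() {
    let data = [10u8, 20, 30];
    assert_eq!(get(&data, 1), Some(20));
    assert_eq!(get(&data, 5), None);
}
```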
So as long as people understand that these guarantees are conditional on the correctness of that trusted code, "memory safe" is a pretty reasonable name for what's going on. And I don't think people really expect to be immune to bugs in their compiler or runtime just because the language is "safe"!
> And I don't think people really expect to be immune to bugs in their compiler or runtime just because the language is "safe"!
I would really ask you to rethink what people think and expect. Maybe you work for a company with lots of people who understand this kind of thing. I am more used to people who just jump straight to "oh, this lib does this, let me use it".
Everywhere I look I only read that Rust is memory safe and you don't have to worry about anything (just don't use unsafe). As I mentioned before, your code might be implicitly vulnerable because of other libs, and even if that's maybe not an issue today, it might be in 5 years, when you have tons of libs out there rewritten in Rust and "hey, they need to use unsafe code, and I can't exclude them".
For me it's important to have a proper definition, and I am not happy about all the marketing around it. I still believe it provides great improvements for the code you own, so you can't mess things up with pointers, use-after-free, and weird things like that.
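For example, something like this use-after-free doesn't even compile (a minimal sketch, and not compiling is the point):

```rust
fn main() {
    let r;
    {
        let s = String::from("hello");
        r = &s; // `r` borrows `s`...
    } // ...but `s` is dropped here while still borrowed
    // error[E0597]: `s` does not live long enough
    println!("{}", r);
}
```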
I am a bit confounded by this comment: if people are pulling in libraries with no concern for the maintenance or security implications, that's going to be an issue regardless of memory safety!
The lowest common denominator of every team in the world simply cannot be our target audience for every technical term we use. There is a minimum level of background people need to learn before they can be effective, and I don't believe we can get around that just by using maximally pedantic language everywhere all the time.
If you are actually encountering misleading Rust materials, I certainly support any efforts to clarify them, but in my experience people are actually pretty good about that already. TFA here has an entire section on `unsafe` in Rust, for instance.
It is helpful to have some way to refer to this approach to language design, and "memory safe" (like "type safe") has a long history with a precise definition. But perhaps you have some alternative "forum thread friendly" term in mind that you would prefer over "memory safe"?