"I do agree with it though. But I recognize that it is a different claim than the one I was putting forward as factual."
Maybe the core issue is that I don't understand why you agree with it :)
Maybe your definition of global correctness is different?
Maybe you are thinking of different properties than I am?
To me, for most (IMHO useful) definitions of global correctness, and for most properties, the claim is provably false.
For me, the kind of local and global correctness that is useful at scale is not really "user-asserted correctness modulo implementation bugs".
Let's take a property like memory safety and talk about it locally and globally.
Let's remove some nuance and say that many of these forms of encapsulation can be thought of as assertions of correctness with respect to memory safety (obviously they assert more than that, and not every kind of encapsulation is even memory safe). The assertion is: you don't have to worry about this - I did, and I'm sure it's right :)
Once that assertion is wrong in even one local routine, the global claim "this program is memory safe" becomes incorrect. Your local correctness did not scale to global correctness, because one wrong local assertion produced a wrong global answer.
Tooling would not have made this mistake.
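To make that concrete, here's a minimal Rust sketch of such a wrong local assertion (the function is hypothetical, not from any real crate):

```rust
/// A "safe" wrapper whose author locally asserted memory safety.
/// The signature promises callers they don't have to worry about it.
pub fn first_byte(data: &[u8]) -> u8 {
    // The local assertion: "data is never empty here." Nothing checks
    // it. If any caller ever passes an empty slice, this is undefined
    // behavior, and the global claim "this program is memory safe" is
    // now false, despite zero `unsafe` at any of the call sites.
    unsafe { *data.get_unchecked(0) }
}
```

Every caller sees an ordinary safe fn, so the wrong local answer propagates silently into the global one.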
Does it matter? Maybe, maybe not! That's the province of creative security researchers and other folks.
My office mate at IBM was once tasked (eons ago) with computing the probability that a random memory bit flip would actually cause a program to misbehave.
Obviously, you can go too far, and end up arguing about whether the cosmic rays affecting your program really violate your proof of correctness :)
But for a property like this, I don't want to rely on norms at scale.
Because those norms mostly generate assertions of correctness.
Once I've got tons and tons of assertions, and nobody has actually proved anything about them, that's a house of cards. Even if people are diligent and right 99% of the time, if you have 100,000 assertions, that's, uh, 1,000 of them that are wrong.
And as discussed, it only takes one to break global correctness.[1]
If you want all 100k to be correct with 90% probability, you'd need people to be 99.9999% correct. That seems unlikely :)
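For the record, the arithmetic behind that figure, assuming the 100k assertions are independent:

```latex
\Pr[\text{all } n \text{ correct}] = p^{\,n} \ge 0.9, \quad n = 100{,}000
\;\Longrightarrow\;
p \ge 0.9^{1/n} = e^{\ln(0.9)/n} \approx 1 - 1.05\times 10^{-6} \approx 99.9999\%
```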
I don't mean that I'm unwilling to accept that the norm is better - I am willing. I certainly agree the average Rust program is less buggy and safer than the average C one.
But I've seen too much at scale not to want some mechanism for verifying that norm, or at least a large part of it.
As an aside, there are also, to me, properties whose proofs are useful modulo implementation bugs.
But for me, these mostly fall under proving algorithmic correctness.
I.e., it's useful to prove that a lock-free algorithm always makes progress, assuming someone did not screw up the implementation. It's separately useful to be able to prove that a given implementation is not screwed up, but that is often much harder.
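A minimal Rust sketch of that split (illustrative only, not anyone's production code): the retry loop below is the textbook lock-free increment. "Some thread always makes progress" is a property of the algorithm; whether this particular implementation got the orderings and the retry right is a separate question.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Textbook lock-free increment via compare-and-swap.
/// Algorithmic claim: a failed CAS means some other thread's CAS
/// succeeded, so the system as a whole always makes progress.
fn lock_free_increment(counter: &AtomicUsize) -> usize {
    loop {
        let cur = counter.load(Ordering::Relaxed);
        match counter.compare_exchange_weak(
            cur,
            cur + 1,
            Ordering::AcqRel,  // ordering on success
            Ordering::Relaxed, // ordering on failure
        ) {
            Ok(prev) => return prev,
            // Another thread won the race; retry with the new value.
            Err(_) => continue,
        }
    }
}
```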
As for norms - I have zero disagreement that Rust has better norms overall, but yes, I've seen erosion. I would recommend, for example, trying some embedded Rust programming if you want to see an area where no Rust norms seem to exist under the covers.
Almost all libraries are littered with safe encapsulation that is utterly broken in many ways. Not "oh, if you think about this very hard, it's broken" broken.
It often feels like they just wanted to make the errors go away, so they put the code in an unsafe block, and then didn't want to have to mark everything as unsafe to encapsulate it. I wish I were joking.
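In miniature, the pattern looks something like this (hypothetical code, deliberately not lifted from any specific crate; the address and layout are made up):

```rust
/// Memory-mapped peripheral registers (layout and address hypothetical).
#[repr(C)]
pub struct Registers {
    pub ctrl: u32,
    pub data: u32,
}

/// The anti-pattern: a raw pointer deref wrapped in a safe fn just to
/// make the compiler errors go away. Nothing enforces uniqueness, so
/// two calls hand out two aliasing `&'static mut` to the same hardware
/// (undefined behavior), while the signature tells every caller this
/// is perfectly fine.
pub fn registers() -> &'static mut Registers {
    unsafe { &mut *(0x4000_0000 as *mut Registers) }
}
```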
These libraries are often the de facto way to achieve something (like Bluetooth support). They are not getting better; they are getting copied, and the broken pieces reused in chunks, spreading the same problems elsewhere. And FWIW, none of these needed much, if any, unsafe at all. Interacting with a Bluetooth controller is not as unsafe as it seems: it is mostly just speaking to an embedded UART and issuing it some well-specified commands. So you probably need unsafe to deal with the send/receive, but not much else.
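To illustrate how little unsafe that actually requires, a sketch (the register address and bare-bones HCI framing are assumptions for illustration, not a real HAL API):

```rust
/// Hypothetical MMIO address of the UART transmit register.
const UART_TX: *mut u32 = 0x4000_1000 as *mut u32;

/// The one place `unsafe` is genuinely needed: raw MMIO access.
fn uart_write_byte(b: u8) {
    // SAFETY (assumed): UART_TX is the valid, device-owned TX register
    // on this hypothetical chip; volatile so the write isn't elided.
    unsafe { UART_TX.write_volatile(u32::from(b)) }
}

/// Everything above the byte transport is ordinary safe Rust: HCI
/// commands are just well-specified byte packets.
fn hci_reset() {
    // HCI command packet: indicator 0x01, opcode 0x0C03 (little
    // endian), parameter length 0.
    for b in [0x01u8, 0x03, 0x0C, 0x00] {
        uart_write_byte(b);
    }
}
```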
I can give you links and details privately; I don't really want to publicly shame things for the sake of this argument :)
There are very well-thought-out embedded libraries, mind you, but, uh, they are the minority.
This is not the only area, mind you, but it's an easy one to poke at.
All norms fail over time, and you have to plan for it. You don't want to rely on them for things like "memory safety" :)
Good leadership, mentoring, etc. make them fail more slowly, but the thing that always causes failure is growth. Fast growth is even worse; there are very few norms that scale and survive factors of 100x. This is especially true of cultural norms.
I don't believe Rust will be the first to succeed at maintaining the level of norms it had 5-10 years ago around this sort of thing in the face of massive growth and scale.
(Though I have no doubt it can if it neither grows nor scales.)
[1] How much global correctness is affected by local correctness depends on the property. For some properties, a few wrong local answers often change nothing, because the global answer is basically minimum(all local answers). For others, a single wrong local answer makes the global answer totally wrong, because it is basically maximum(all local answers).
The closer a property is to a simple union/intersection or min/max of local answers, the easier it is to compute global correctness, but the righter your local answers have to be :)
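Memory safety is the conjunction-like extreme. One sketchy way to picture footnote [1]:

```rust
/// When the global property is effectively an AND (an intersection /
/// minimum) over local assertions, a single false local answer flips
/// the global answer - the "one bad assertion" case discussed above.
fn globally_memory_safe(local_assertions: &[bool]) -> bool {
    local_assertions.iter().all(|&ok| ok)
}
```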
> Maybe the core is that i don't understand why you agree with it :)
Because of encapsulation. I don't need to look far to see the effects of encapsulation (and abstraction) on computing.
I read your whole comment, but I really want to tighten this discussion up. I think the biggest thing I'm personally missing before I can come over to your view of things is examples. In particular:
> Almost all libraries are littered with safe encapsulation that is utterly broken in many ways. Not like "oh if you think about this very hard it's broken".
Can you show me? If it's really "almost all," then you should even be able to point to a crate I've authored with a broken safe encapsulation. `regex-automata`, `jiff`, `bstr`, `byteorder`, `memchr` and `aho-corasick` all use `unsafe`. Can you find a point of unsoundness?
I don't want a library here or there. I am certain there are some libraries that are intentionally flouting Rust's norms here. So a couple of examples wouldn't be enough to convince me, because I don't think a minority of people flouting Rust's norms is a big problem unless it can be shown that this minority is growing. What I want to see is evidence that this is both widespread and intentional. It's hard for me to believe that it is without me noticing.
If you want to do this privately, you can email: jamslam@gmail.com