In Rust, objects can dynamically go in and out of having virtual dispatch. The vtable pointer lives in the reference to the object, not in the object itself, so you can add or remove it. You can take a non-virtual object, lend it temporarily as a dynamically dispatched object, and then go back to using it directly as a concrete type, without reallocating anything.
That's pretty powerful:
• any type can be used in dynamic contexts, e.g. implement Debug printing for an int or a Point, without paying the cost of a vtable pointer in each instance.
• you don't rely on, and don't fight, devirtualization. You can decide to selectively use dynamic dispatch in some places to reduce code size, without committing to it everywhere.
• you can write your own trait with your own virtual methods for a foreign type, and the type doesn't have to opt in to this (see the sketch below).
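A minimal sketch of what that looks like (the `Describe` trait and its method are made up for illustration):

```rust
// A custom trait implemented for a foreign type (i32 from std),
// without i32 having to opt in to anything.
trait Describe {
    fn describe(&self) -> String;
}

impl Describe for i32 {
    fn describe(&self) -> String {
        format!("the number {self}")
    }
}

// Dynamic dispatch happens only here; the vtable pointer is part of
// the `&dyn Describe` reference, not part of the i32 itself.
fn print_dyn(item: &dyn Describe) {
    println!("{}", item.describe());
}

fn main() {
    let n: i32 = 42;     // a plain integer, no vtable stored in it
    print_dyn(&n);       // lent temporarily as a trait object
    let doubled = n * 2; // back to direct, statically dispatched use
    println!("{doubled}");
}
```

The same value is used both ways in the same function, and only the call site that needs dynamic dispatch pays for it.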
For a long time Tesla had barely any presence in Europe, and the Model S and X were expensive, rare cars from a weird foreign brand. Driving one was a delicate balance between coolness and showing off, and then Musk changed the image from cool to extremely divorced midlife crisis.
Now that EVs are more mainstream, there's already competition from new VW, Kia, Hyundai, Renault, Volvo, and BMW models, which makes the Model 3 kinda bland, and the more popular Model Y looks like the Model 3's CAD design stretched vertically by 20%.
The Supercharger network doesn't have a significant advantage in Europe, since all new cars already use the same connector, so buying a Tesla is a deliberate choice. People question why you'd buy a car that isn't pretty, isn't cheap, isn't from a "real" car brand, and is associated with Musk's antics.
Stroustrup is right that C++ is under pressure, but I'm baffled that his response to this is so ineffective.
The focus on incremental, backwards-compatible changes only looks inwards and doesn't face the external pressure. Limiting solutions to ones that don't require rewriting C++ code is not going to stop those who have already started rewriting their C++ code. Banning proposals that create safe subsets or lifetime annotations (d3466 R1 4.4/4.5) isn't satisfying to anyone who is already jumping to new languages that have these things.
The direction of C++ WG only cements the perception that C++ is for unfixable legacy codebases. They have decided that having the cake is absolutely critical, and promise to find a way to eat it.
Stroustrup seems to see the safety issue only as a problem with codebases that aren't using Modern C++ (which is a good thing to fix from the C++ perspective), but doesn't seem to realize that the bar for safety has been set way above Modern C++. Those moving away from C++ have heard of smart pointers and std::vector. These aren't the solution they're looking for; these are the problems they want to get rid of.
There's address sanitizer, and there are languages with garbage collectors and runtime bounds checks. There are WASM VMs, and even RLBox, which translates WASM back into C that checks its own pointers at run time.
The difficulty is in shifting most of these checks to compile time. Proving things at compile time is the holy grail, because instead of paying a run-time cost only to make the program crash sooner, you can catch the violations before they even make it into the program, pay no run-time cost, and provably not have such crashes either.
But that needs reliable static analysis, and C++ doesn't have enough guarantees and information in its type system to make that possible with a high degree of accuracy in non-trivial cases. This is not a matter of writing a smarter tool.
Statically tracking how pointers are used quickly becomes infeasible: every if/else doubles the state space, loops can mix the state in ways that make symbolic reasoning provably impossible (undecidability), pointer aliasing creates lots of nearly-useless edge cases, and everything done through "escaping" pointers adds the state of the whole program to every individual state analysed, quickly hitting the limits of what can be proven. For example, if the use of a pointer depends on obj->isEnabled, now you have to trace back all the paths that lead to getting this obj instance, and all the code paths that could modify the flag, and cross-reference them to know whether this particular obj could have the flag set at this point in time... which can be infeasible. Everything ends up depending on everything, and if you give up and mark something as "unknown", it spreads like NaN, making the rest of the analysis unknown too, and you can't prove the safety of anything complex enough to need such a proof.
Rust and Circle/Safe C++ solve this problem by banning all the cases that are hard for static analysis: no temporary pointers in globals, no mutable aliasing, no pointer arithmetic without a checkable length, strict immutability, and single ownership and lifetimes of memory baked into the static type system rather than left as a dynamic property that has to be inferred by analyzing the program's behavior. This isn't some magic that can be sprinkled onto a codebase. The limitations are significant, and require particular architectures and coding patterns that are compatible with them. Nobody wants to rewrite all the existing C++ code, and that applies to not wanting to rewrite for Profiles too. I don't see how C++ can have that cake and eat it too.
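As a minimal illustration of what "banning the hard cases" means in practice, this is the kind of aliasing Rust simply refuses to compile, instead of trying to analyze it:

```rust
fn main() {
    let mut items = vec![1, 2, 3];

    let first = &items[0]; // shared borrow into the Vec's buffer
    // items.push(4);      // rejected if uncommented: can't mutate `items`
                           // (which may reallocate the buffer) while the
                           // shared borrow `first` is still alive
    println!("{first}");

    items.push(4);         // fine here: the shared borrow has ended
    println!("{items:?}");
}
```

The compiler doesn't try to prove whether this particular push would reallocate; the pattern is disallowed wholesale, which is exactly what keeps the analysis tractable.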
Is there a mainstream language where this still holds true?
From what I've seen, most languages don't want to have a Turing-complete type system, but end up with one anyway. It doesn't take much, so it's easy to end up with one accidentally and/or by adding conveniences that don't seem programmable, e.g. associated types and type equality.
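As a sketch of how little it takes, here's type-level arithmetic in Rust built out of nothing but traits, associated types, and type equality (all the names are made up for illustration):

```rust
use std::marker::PhantomData;

// Type-level Peano numbers: the computation happens entirely in the
// trait solver, with no runtime representation needed.
struct Zero;
struct Succ<N>(PhantomData<N>);

trait Add<Rhs> {
    type Sum;
}

impl<Rhs> Add<Rhs> for Zero {
    type Sum = Rhs;
}

impl<N, Rhs> Add<Rhs> for Succ<N>
where
    N: Add<Rhs>,
{
    type Sum = Succ<<N as Add<Rhs>>::Sum>;
}

type One = Succ<Zero>;
type Two = Succ<One>;
type Three = Succ<Two>;

fn main() {
    // 1 + 2 = 3, checked via type equality: this only compiles
    // if the two types are the same.
    let _check: PhantomData<<One as Add<Two>>::Sum> = PhantomData::<Three>;
}
```

The trait solver is doing the computation here; in general it will keep recursing until it hits the compiler's recursion limit, which is the practical symptom of the type system being able to run unbounded computation.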
To have a mere one-in-a-billion chance of getting a SHA-256 collision, you'd need to spend 160 million times more energy than the total annual energy production on our planet (and that's assuming our best bitcoin-mining efficiency; actual file hashing needs way more energy).
The probability of a collision is so astronomically small that if your computer ever observed a SHA-256 collision, it would certainly be due to a CPU or RAM failure (bit flips are within the range of probabilities that actually happen).
You know, I've been hearing people warn about handling potential collisions for years, and knew the odds were negligible, but never really delved into it in any practical sense.
Yeah, there is definitely some merit to more efficient hashing. Trees with a lot of duplicates require a lot of hashing, but hashing the entire file would be required regardless of whether partial hashes are done or not.
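For illustration, this is roughly what a partial-hash first pass looks like (the 4 KiB prefix size and the `prefix_hash` helper are made up; neither tool necessarily works this way):

```rust
use std::fs::File;
use std::io::{self, Read};
use std::path::Path;

// Hash only the first 4 KiB of a file. Files whose prefix hashes differ
// can't be duplicates, so the expensive full-file hash is only needed for
// candidates that survive this cheap first pass.
fn prefix_hash(path: &Path) -> io::Result<blake3::Hash> {
    let mut buf = Vec::with_capacity(4096);
    File::open(path)?.take(4096).read_to_end(&mut buf)?;
    Ok(blake3::hash(&buf))
}
```

Files that match only on the prefix still need a full hash before they can be treated as duplicates, which is the point above: with many real duplicates, the full-file hashing happens anyway.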
I have one data set where `dedup` was 40% faster than `dupe-krill` and another where `dupe-krill` was 45% faster than `dedup`.
`dupe-krill` uses blake3, which, last I checked, was not hardware accelerated on M-series processors. What's interesting is that, because of hardware acceleration, `dedup` is mostly CPU-idle, waiting on the hash calculation, while `dupe-krill` is maxing out 3 cores.
Giving good feedback about Rust<>C bindings requires knowing Rust well. It needs deep technical understanding of Rust's safety requirements, as well as a sense of Rust's idioms and design patterns.
C maintainers who don't care about Rust may have opinions about the Rust API, but that's not the same thing :)
There are definitely things that can be done in C to make Rust's side easier, and it'd be much easier to communicate if the C API maintainer knew Rust, but it's not necessary. Rust exists in a world of C APIs, none of which were designed for Rust.
The Rust folks can translate their requirements into C terms. The C API needs to have documented memory-management and thread-safety requirements, but that documentation can be in any language.
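As a hedged sketch of how those documented requirements end up encoded on the Rust side (the `widget_new`/`widget_free` functions are hypothetical, not a real library):

```rust
use std::os::raw::c_void;

// Hypothetical C API. Its documentation (in any language) must say who
// frees the handle and whether it may be touched from multiple threads.
extern "C" {
    fn widget_new() -> *mut c_void;
    fn widget_free(w: *mut c_void);
}

// The safe wrapper encodes the documented rules in Rust's type system:
// the handle is freed exactly once (via Drop), and the type is NOT marked
// Send/Sync unless the C docs explicitly guarantee thread safety.
pub struct Widget(*mut c_void);

impl Widget {
    pub fn new() -> Option<Widget> {
        // Assumed documented contract: a null return means failure.
        let ptr = unsafe { widget_new() };
        if ptr.is_null() { None } else { Some(Widget(ptr)) }
    }
}

impl Drop for Widget {
    fn drop(&mut self) {
        unsafe { widget_free(self.0) }
    }
}
```

None of this requires the C maintainer to write Rust; it only requires the ownership and threading rules to be written down somewhere the Rust side can rely on.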
Site owners are increasingly concerned about AI crawlers taking their content without giving anything in return.
I'm afraid that this fight will only get more desperate as AI and AI agents start to replace more and more web browsing.