One of the two other proposals is user-defined type decay, which lets you choose what type auto will be deduced as. That is, given "auto x = y;", x might not have the type of y; it can instead be anything you choose…
This is like implicit type conversion on steroids. And all this because C++ lacks the basic safety features to avoid dangling pointers.
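To make that concrete, here is a minimal sketch of the status quo such a proposal targets; std::vector&lt;bool&gt; is one well-known case, since its operator[] returns a proxy type that points into the vector, and user-defined decay would let that proxy opt into being deduced as plain bool instead:

    #include <vector>

    int main() {
        std::vector<bool> v{true, false};

        // auto deduces the initializer's type after standard decay:
        // v[0] yields std::vector<bool>::reference, a proxy pointing
        // into v, so b is a proxy object, not a bool.
        auto b = v[0];

        v.push_back(true);  // may reallocate; the proxy in b now dangles

        bool ok = b;        // undefined behavior if v reallocated
        (void)ok;
    }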
> lacks the basic safety features to avoid dangling pointers
It doesn't lack them. Unfortunately, C++ programmers often choose not to use those safety features, for performance reasons (or aesthetics, or disagreement with the idea that a language should take into account that a programmer might make a mistake, but at least performance is a good one). C++ actually has quite a few tools to prevent the memory management issues that cause typical C/C++ bugs.
Using modern C++ safety features won't completely prevent bugs and memory issues, just like using Rust won't, but the mess that causes the worst bugs is the result of a choice, not the language itself.
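For one concrete example of those features (a minimal sketch of my own, not the commenter's), RAII ownership via std::unique_ptr removes the manual delete that raw pointers require:

    #include <memory>
    #include <string>

    // Manual management: the caller must remember to delete, and any
    // copy of the raw pointer dangles after that delete.
    std::string* make_raw() { return new std::string("hello"); }

    // RAII: ownership is explicit in the type, and the string is freed
    // exactly once when the unique_ptr goes out of scope.
    std::unique_ptr<std::string> make_owned() {
        return std::make_unique<std::string>("hello");
    }

    int main() {
        auto s = make_owned();  // no delete needed, no double free
        return s->size() > 0 ? 0 : 1;
    }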
Tell that to the designers of the C++ standard library and of the new features being added. They're the ones who keep adding features that depend on references and raw pointers instead of std::shared_ptr or std::unique_ptr.
I don't think this is the only reason. If it were, they could easily have added overloads that work with both std smart pointers and with plain pointers for compatibility. Or they could add pointer type template parameters, maybe with concepts for the right ownership semantics.
shared_ptr and unique_ptr aren’t useful for reasoning about the lifetimes of stack-based objects (unless you’re willing to require that such objects always be dynamically allocated, which is often not a reasonable requirement).
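A minimal illustration of that point: the classic dangling reference to a stack object compiles whether or not smart pointers are in play, because no smart pointer type participates at all:

    #include <string>

    // Returns a reference to a local: the string is destroyed when the
    // function returns, so the caller gets a dangling reference. Smart
    // pointers can't express this lifetime; the only way to involve them
    // would be to heap-allocate the string, which is often unreasonable.
    const std::string& broken() {
        std::string local = "on the stack";
        return local;  // compilers warn here, but it still compiles
    }

    int main() {
        const std::string& s = broken();
        return s.size();  // undefined behavior: s dangles
    }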
Has he? He at least used to be the biggest proponent of it, "just follow these standards and development practices that I had to meticulously develop for the US military, that no tool can automatically check, and you'll be fine!".
Smart pointers were added to the language 14 years ago, in C++11. You're free to use old C++ with raw pointers and manual memory management, risking dangling pointers, or use modern C++, which provides smart pointers to avoid those issues.
And yet most, if not all, of the standard library keeps using pointer or reference arguments, not the newer smart pointers that would actually document the ownership semantics.
Most arguments to standard library calls don't need to take ownership of memory, so using a raw pointer or (const) reference is correct. The general rule: smart pointers to designate ownership, raw pointers to "borrow".
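Sketched with hypothetical function names (the convention itself, not any specific std API):

    #include <memory>

    struct Widget { int value = 0; };

    // Takes ownership: the caller must std::move the unique_ptr in, and
    // the Widget is destroyed when adopt's parameter goes out of scope.
    void adopt(std::unique_ptr<Widget> w) { /* owns and frees w */ }

    // Borrows: the caller keeps ownership and responsibility for lifetime.
    void inspect(const Widget& w) { (void)w.value; }
    void mutate(Widget* w) { if (w) w->value += 1; }  // may be null

    int main() {
        auto w = std::make_unique<Widget>();
        inspect(*w);          // borrow by reference
        mutate(w.get());      // borrow by raw pointer
        adopt(std::move(w));  // transfer ownership; w is now empty
    }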
If a function takes a raw pointer, you need to check the docs to know whether it takes ownership or not. There is no general rule across the whole of std saying that functions taking raw pointers are only borrowing the value.
And even if you could assume that pointer parameters represent borrowing, they are definitely not guaranteed to represent scoped borrowing: the function could store the pointer somewhere, and then you end up with other issues. So if you care about safety, shared_ptr is the only way to represent a borrowed pointer. And if that's too costly, then, had the std designers cared about safety, they could have introduced a std::borrowed_ptr<T> that is just a wrapper around T*, used uniformly in all std functions that borrow a pointer and guarantee not to store it.
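There is no std::borrowed_ptr today; here is a minimal sketch of what such a wrapper could look like, assuming its only job is to spell "borrowed, not stored" in the type at zero runtime cost:

    // Hypothetical, NOT in the standard library: documents "borrowed for
    // this call only" in the type; no overhead over a plain T*.
    template <typename T>
    class borrowed_ptr {
    public:
        explicit borrowed_ptr(T* p) : p_(p) {}
        T& operator*() const { return *p_; }
        T* operator->() const { return p_; }
        explicit operator bool() const { return p_ != nullptr; }
    private:
        T* p_;
    };

    // By convention, a function taking borrowed_ptr promises not to keep it.
    int read_value(borrowed_ptr<const int> p) { return p ? *p : 0; }

    int main() {
        int x = 42;
        return read_value(borrowed_ptr<const int>(&x)) == 42 ? 0 : 1;
    }

Something very close to this was in fact proposed as std::experimental::observer_ptr in the Library Fundamentals TS, but it never made it into the standard proper.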
Why? If all standard functions that take no ownership and keep no references use raw pointers, then it behaves the same way user code and C++ devs expect: if a function takes a pointer, it claims no ownership. You look at standard_function(T*), see a raw pointer, and can assume it is neither taking ownership nor keeping references.
I would not say stop using it. But stick to the really needed features, and stop adding more features every 3 years. Nobody can keep up, not the developers, not the compilers... it's just insane.
> This is like implicit type conversion on steroids. And all this because C++ lacks the basic safety features to avoid dangling pointers.
Stop using C++ already!