I came here to comment the same thing. Senders/receivers have the added advantage of not allocating, unlike coroutines, which usually have to allocate on the heap.
I haven't compared performance with the P2300 proposal yet. It seems like it's trying to unify asynchronous and parallel execution for C++, which is much broader in scope than my library.
It's true that coroutines can avoid heap allocation, but I haven't tested when or if that happens in my implementation. From the papers, it's clear that certain conditions must be met for the compiler to optimize this. If you know of any good sources on this, please let me know.
I think it's definitely worth looking into this optimization and possibly using custom allocators for specific cases. I'll also compare performance with the proposal's implementation[1] to see the difference.
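To make that concrete, here's a minimal sketch of routing the coroutine frame allocation through a custom `operator new` on the promise type; the `task` type is purely illustrative (not from my library), and the allocation may still be elided at higher optimization levels:

```
#include <coroutine>
#include <cstddef>
#include <cstdio>
#include <cstdlib>

// Illustrative fire-and-forget task whose promise routes frame allocation
// through custom hooks, so you can observe when an allocation actually happens.
struct task {
    struct promise_type {
        // Custom frame allocation: swap in an arena/pool allocator here.
        void* operator new(std::size_t n) {
            std::printf("coroutine frame allocated: %zu bytes\n", n);
            return std::malloc(n);
        }
        void operator delete(void* p, std::size_t) { std::free(p); }

        task get_return_object() { return {}; }
        std::suspend_never initial_suspend() noexcept { return {}; }
        std::suspend_never final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
};

task example() { co_return; }

int main() { example(); }
```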
Having it available means we can use it explicitly. For example, I could see a compiler flag making `std::vector<T>::operator[]` checked by default, and then, if profiling warrants, removing the check by verifying the index myself and telling the compiler it's in range (invoking UB if I'm wrong). Not saying that's the pattern people will use, but having an escape hatch makes safer-by-default behavior more approachable.
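For illustration, a hedged sketch of what that escape hatch might look like, assuming a hypothetical mode where `operator[]` is bounds-checked by default; `[[assume]]` is C++23, and `sum_first_n` is a made-up function for the example:

```
#include <cstddef>
#include <vector>

int sum_first_n(const std::vector<int>& v, std::size_t n)
{
    if (n > v.size())
        return 0;                  // explicit out-of-bounds handling, done once

    int total = 0;
    for (std::size_t i = 0; i < n; ++i) {
        [[assume(i < v.size())]];  // promise the index is in range (UB if false)
        total += v[i];             // a hypothetical per-access check could be elided here
    }
    return total;
}
```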
As a C++ user, I find shared_ptr great for some things, but it is an anti-pattern. shared_ptr<const T> is much, much better. The problem is that shared_ptr<T> isn't value-semantic, so it destroys local reasoning. That said, there are places for it, but it's very easy to make a mess with it.
I’m a huge fan of stlab::copy_on_write<T>, which is fundamentally very similar, but which is value-semantic, doesn’t let you create cycles, and gives you local reasoning.
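For example (a minimal sketch, assuming the `read()`/`write()` accessors and header path from the stlab documentation):

```
#include <stlab/copy_on_write.hpp>
#include <cassert>
#include <string>

int main()
{
    stlab::copy_on_write<std::string> a{"hello"};
    auto b = a;                        // cheap copy: both share one representation
    assert(&a.read() == &b.read());    // same underlying object while unmodified

    b.write() += ", world";            // write() makes a private copy first
    assert(a.read() == "hello");       // a is untouched: value semantics
    assert(b.read() == "hello, world");
}
```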
Can you express value semantics in Python, or is everything still passed by mutable reference and duck-typed at runtime? I’m being flip, and I like Python for some things, but for large, robust, performant software I find C++ far easier to work with: it works with me, not against me, to wrangle complexity.
I thought that too. I think the point there was that you don’t have to push this notion as far as in/out of functions: just flipping them within a function can be beneficial.
I agree with them and with you. It looks like they work in some poor language that doesn’t allow overloading. Their example of `frobnicate`ing an optional being bad made me think: why not both? `void frobnicate(Foo&); void frobnicate(std::optional<Foo>& foo) { if (foo.has_value()) { frobnicate(*foo); } }`. Now you can frobnicate `Foo`s and optional ones!
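Spelled out as a compilable sketch (`Foo` and the bodies are just stand-ins):

```
#include <optional>

struct Foo { /* ... */ };

void frobnicate(Foo&) { /* the real work */ }

// Overload for optionals: forwards to the Foo overload when a value is present.
void frobnicate(std::optional<Foo>& foo)
{
    if (foo.has_value()) {
        frobnicate(*foo);  // note the dereference, otherwise this would recurse
    }
}

int main()
{
    Foo f;
    std::optional<Foo> maybe = Foo{};
    std::optional<Foo> empty;

    frobnicate(f);      // plain Foo
    frobnicate(maybe);  // unwrapped and forwarded
    frobnicate(empty);  // no-op
}
```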
For list comprehensions, we have (C++23): `std::ranges::to<std::vector>(items | std::views::filter(shouldInclude) | std::views::transform(f))`. It’s not quite `[f(x) for x in items if shouldInclude(x)]`, but it’s the same idea.
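As a complete (if contrived) example, with placeholder definitions for `shouldInclude` and `f`:

```
#include <ranges>
#include <vector>

bool shouldInclude(int x) { return x % 2 == 0; }  // placeholder predicate
int f(int x) { return x * x; }                    // placeholder transform

int main()
{
    std::vector<int> items{1, 2, 3, 4, 5, 6};

    // Roughly: [f(x) for x in items if shouldInclude(x)]
    auto result = std::ranges::to<std::vector>(
        items | std::views::filter(shouldInclude)
              | std::views::transform(f));
    // result == {4, 16, 36}
}
```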
To be honest, if that's the notation, I will not be very eager to jump on C++23. That said, I admire people whose minds stay open to C++ improvements and who make that effort.
namespace X
{
    using ::f;  // global f is now visible as ::X::f
    using A::g; // A::g is now visible as ::X::g
}

void h()
{
    X::f(); // calls ::f
    X::g(); // calls A::g
}
That’s what tests are for. And if `print_table` is factored properly, they won’t want to add flags; they’ll build a new function out of the pieces of `print_table`, one with distinct behavior of its own.
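A hypothetical sketch of that kind of factoring (`format_row`, `print_table`, and `print_csv` are made-up names for the example):

```
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

using Row = std::vector<std::string>;

// The reusable piece: joins one row with a delimiter.
std::string format_row(const Row& row, char delimiter)
{
    std::string out;
    for (std::size_t i = 0; i < row.size(); ++i) {
        if (i != 0) out += delimiter;
        out += row[i];
    }
    return out;
}

// One composition of the pieces...
void print_table(const std::vector<Row>& rows)
{
    for (const auto& row : rows)
        std::cout << format_row(row, '\t') << '\n';
}

// ...and distinct behavior gets its own function built from the same pieces,
// rather than a bool flag on print_table.
void print_csv(const std::vector<Row>& rows)
{
    for (const auto& row : rows)
        std::cout << format_row(row, ',') << '\n';
}
```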