Oh man, don't get me started. This was a point in a talk I gave years ago called "Please Please Help the Compiler" (what I thought was a clever cut at the conventional wisdom at the time of "Don't Try to Help the Compiler")
I work on the MSVC backend. I argued pretty strenuously at the time that noexcept was costly and being marketed incorrectly. Perhaps the costs are worth it, but nonetheless there is a cost.
The reason is simple: there is a guarantee here that exceptions don't escape noexcept functions. If one does, std::terminate has to be called, and that has to be implemented. There is some cost to that - conceptually, every noexcept function (or worse, every call to a noexcept function) is surrounded by a giant try/catch(...) block.
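In a hand-wavy sketch (do_work() is a made-up potentially-throwing helper; this is not literal compiler output), that wrapping looks like:

    #include <exception>

    void do_work();   // hypothetical helper that might throw

    // what you write:
    void f() noexcept { do_work(); }

    // what the implementation must behave as-if:
    void f_as_if() {
        try {
            do_work();
        } catch (...) {
            std::terminate();   // an exception escaping a noexcept function ends the program
        }
    }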
Yes there are optimizations here. But it's still not free
Less obvious: how does inlining work? What happens if you inline a noexcept function into a function that allows exceptions? Do we now have "regions" of noexcept-ness inside that function (answer: yes)? How do you implement that? Again, this is implementable, but it is even harder than the whole-function case, and a naive/early implementation might prohibit inlining across degrees of noexcept-ness in order to remain correct/as-if. And guess what, this is what early versions of MSVC did, and this was our biggest problem: a problem which grew release after release as noexcept permeated the standard library.
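A minimal sketch of the semantic issue (may_throw/handle are made-up names):

    void may_throw();   // hypothetical, potentially-throwing
    void handle();      // hypothetical

    inline void step() noexcept { may_throw(); }

    void caller() {          // caller itself allows exceptions
        try {
            step();          // once step() is inlined, the call to may_throw() sits directly
                             // inside caller(), but an exception from it must still terminate
                             // the program - it must NOT reach the catch below. That inlined
                             // range becomes a noexcept "region" inside a throwing function.
        } catch (...) {
            handle();
        }
    }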
Anyway. My point is, we need more backend compiler engineers on WG21 and not just front end, library, and language lawyer guys.
I argued then that if instead noexcept violations were undefined, we could ignore all this, and instead just treat it as the pure optimization it was being marketed as (ie, help prove a region can't throw, so we can elide entire try/catch blocks etc). The reaction to my suggestion was not positive.
*edit 2 also I have since added a heuristic bonus for the "inline" keyword because I could no longer stand the irony of "inline" not having anything to do with inlining
*edit 3 ok, also statements like "consider doing X if you have no security exposure" haven't held up well
> Anyway. My point is, we need more backend compiler engineers on WG21 and not just front end, library, and language lawyer guys.
Even better: the current way of working is broken. WG21 should only discuss papers that come with a preview implementation, just as other language ecosystems do.
We have had too many features approved with "on paper only" designs that turned out to be bad ideas once they were finally implemented, some of which were then removed or changed in later ISO revisions. That alone proves this isn't working.
> I argued then that if instead noexcept violations were undefined, we could ignore all this, and instead just treat it as the pure optimization it was being marketed as (ie, help prove a region can't throw, so we can elide entire try/catch blocks etc).
Do you know if the reasoning for originally switching noexcept violations from UB to calling std::terminate was documented anywhere? The corresponding meeting minutes [0] describe the vote to change the behavior but not the reason(s). There's this bit, though:
> [Adamczyk] added that there was strong consensus that this approach did not add call overhead in quality exception handling implementations, and did not restrict optimization unnecessarily.
I think WG21 has been violently against adding additional UB to the language, because of some hacker news articles a decade ago about people being alarmed at null pointer checks being elided or things happening that didn’t match their expectation in signed int overflow or whatever. Generally it seems a view has spread that compiler implementers treat undefined behavior as a license to party, that we’re generally having too much fun, and are not to be trusted.
In reality undefined behavior is useful in the sense that (like this case) it allows us to not have to write code to consider and handle certain situations - code which may make all situations slower, or allows certain optimizations to exist which work 99% of the time.
Regarding “not pan out”: I think the overhead of noexcept for the single function call case is fine, and inlining is and has always been the issue.
> I think WG21 has been violently against adding additional UB to the language, because of some hacker news articles a decade ago about people being alarmed at null pointer checks being elided or things happening that didn’t match their expectation in signed int overflow or whatever.
Huh, didn't expect the no-UB sentiment to have extended that far back!
> Regarding “not pan out”: I think the overhead of noexcept for the single function call case is fine, and inlining is and has always been the issue.
Do you know if the other major compilers also face similar issues?
Things are much better in 2024 in MSVC than they were in 2014. The overhead today is mostly the additional metadata associated with tracking the state, and most of the inlining incompatibilities were worked through (with a ton of work by the compiler devs). So it's a binary size issue. We've even been working on that (I remember doing work to combine adjacent identical regions, etc). Not sure what the status is in GCC/LLVM today.
I'm just a little sore about it because it was being sold as a "hey here is an optimization!" and it very much was not, at least from where I was sitting. I thought this was a very very good case of having it be UB (I think the entire class of user source annotations like this should be UB if the runtime behavior violates the user annotation)
Do you think optimizations could eventually bring the std::terminate version of noexcept near/up to par with a hypothetical UB noexcept, or do you think that at least some overhead will always be present?
Could the UB version of noexcept be provided as a compiler extension? Either a separate attribute or a compiler flag to switch the behavior would be fine.
It's kinda funny that C++ even in recent editions generally reaches for the UB gun to enable optimizations, but somehow noexcept ended up to mean "well actually, try/catch std::terminate". I bet most C++-damaged people would expect throwing in a noexcept function to simply be UB and potentially blow their heap off or something instead of being neatly defined behavior with invisible overhead.
Probably the right thing for noexcept would be to enforce a "noexcept may only call noexcept methods", but that ship has sailed. I also understand that it would necessarily create the red/green method problem, but that's sort of unavoidable.
Unless you're C++-damaged enough to assume it's one of those bullshit gaslighting "it might actually not do anything lol" premature optimization keywords, like `constexpr`.
`inline` is my favorite example of this. It's a case of "this does things, just not what you think it does, and it's also not used for what you think it's for. Don't use it."
> I argued then that if instead noexcept violations were undefined, we could ignore all this, and instead just treat it as the pure optimization it was being marketed as (ie, help prove a region can't throw, so we can elide entire try/catch blocks etc). The reaction to my suggestion was not positive.
So instead of helping programmers actually write noexcept functions, you wanted to make this an even bigger footgun than it already is? How often are there try/catch blocks that are actually elideable in real-world code? How much performance would actually be gained by doing that, versus the cost of all of the security issues that this feature would introduce?
If the compiler actually checked that noexcept code can't throw exceptions (i.e. noexcept functions were only allowed to call other noexcept functions), and the only way to get exceptions in noexcept functions was calls to C code which then calls other C++ code that throws, then I would actually agree with you that this would have been OK as UB (since anyway there are no guarantees that even perfectly written C code that gets an exception wouldn't leave your system in a bad state). But with a feature that already relies on programmer care, and can break at every upgrade of a third party library, making this UB seems far too dangerous for far too little gain.
-fno-exceptions only prevents you from calling throw. If you don't want the overhead, you likely want -fno-asynchronous-unwind-tables plus that clang flag that specifies that extern "C" functions don't throw.
I'm pretty sure I could see a roughly 10% binary size decrease in my C++ projects just by setting -fno-exceptions, and that was for C++ code that didn't use exceptions in the first place, so there must be more to it than just forbidding throw. Last time I tinkered with this stuff was around 2017 though.
And based on a few clang discourse threads, it only removes .eh_frame
I think this only affects binary size. I understand smaller binaries can load faster, but not being able to get stack traces for debuggers and profilers seems like a loss.
> there is a guarantee here that exceptions don't escape noexcept functions. If one does, std::terminate has to be called, and that has to be implemented
Could you elaborate on how this causes more overhead compared to not using noexcept? The fact that something has to be done when throwing an exception is true in both cases, right? Naively it'd seem like without noexcept, you raise the exception; and with noexcept, you call std::terminate instead. Presumably the compiler is already moving your exception throwing instructions off the happy hot path.
Very very basic test with Clang: https://godbolt.org/z/6aqWWz4Pe
Looks like both variations have similar code structure, with 1 extra instruction for noexcept.
Pick a different architecture - anything 32-bit. Exception handling on 64-bit Windows works differently: the overhead is in the PE headers instead of directly in the asm (and is in general lower). You don't have the setup and teardown in your example.
Throwing an exception has the same overhead in both cases. In the case of a noexcept function, the function has to (or used to have to, depending on the architecture) set up an exception handling frame and remove it when leaving.
>Naively it'd seem like without noexcept, you raise the exception; and with noexcept, you call std::terminate instead
Except you may call a normal function from a noexcept function, and this function may still raise an exception.
If you're on one of the platforms with sane exception handling, it's a matter of emitting different assembly code for the landing pad so that when unwinding it calls std::terminate instead of running destructors for the local scope. Zero additional overhead. If you're on old 32-bit Microsoft Windows using MSVC 6 or something, well, you might have problems. One of the lesser ones being increased overhead for noexcept.
It's zero runtime overhead in the good case but still has an executable size overhead for functions that previously did not need to run any destructors.
Very true. Then again, if you don't need to tear down anything (ie. run destructors) during error handling you're either not doing any error handling or you're not doing any useful work.
I’m curious: where does the overhead of try/catch come from in a “zero-overhead” implementation?
Is it just that it forces the stack to be “sufficiently unwindable” in a way that might make it hard to apply optimisations that significantly alter the structure of the CFG? I could see inlining and TCO being tricky perhaps?
Or does Windows use a different implementation? Not sure if it uses the Itanium ABI or something else.
Everyone keeps glossing over the inlining issues, which I think are much larger.
“Zero overhead” refers to the actual function's codegen; there are still tables and stuff that have to be updated.
Our implementation of noexcept for the single function case is, I think, fine now. There is a single extra bit in the exception function info which is checked by the unwinder. The remaining cost is requiring exception info in cases where we otherwise wouldn't need it.
The inlining case has always been both more complicated and more of a problem. If your language feature inhibits inlining in any situation you have a real problem
Doesn't every function already need exception unwinding metadata? If the function is marked noexcept, then can't you write the logical equivalent of "Unwinding instructions: Don't." and the exception dispatcher can call std::terminate when it sees that?
Nah that was mostly about extern "C" functions which technically can't throw (so the noexcept runtime stuff would be optimized out) but in practice there is a ton of code marked extern "C" which throws
Well, given that qsort and bsearch take a function pointer and call it, that function pointer can easily point to a function that throws. So I think this applies to all implementations of qsort and bsearch. Especially since there is no way to mark a function pointer as noexcept.
> Especially since there is no way to mark a function pointer as noexcept.
There is, noexcept is part of the type since C++17. In fact, I prefer noexcept function pointer parameters for C library wrappers, as I don't expect most libraries written in C to deal with stack unwinding at all.
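For example, a quick sketch:

    void throwing_fn();
    void nothrow_fn() noexcept;

    using Callback = void (*)() noexcept;   // noexcept is part of the function type (C++17)

    Callback cb = nothrow_fn;       // OK
    // Callback bad = throwing_fn;  // ill-formed: potentially-throwing -> noexcept pointer
    void (*plain)() = nothrow_fn;   // OK: the reverse conversion is allowed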
Any library implementation that is C++ compliant must implement this. I'm pretty sure that libstdc++ + glibc is compliant, assuming sane glibc compiler options.
But to me these are - again - user-induced problems. I'm interested in whether, if a user doesn't do stupid things, they should still be afraid that standard extern "C" code could throw. Say, std::sprintf(), which if I'm not mistaken boils down to C directly? Are there cases where the C standard library could throw without "help" from the user?
I don't think anything in the C parts of the C++ standard library throws its own exceptions. However, it's not completely unreasonable for a third party C library to use a C++ library underneath, and that might propagate any exceptions that the C++ side throws. This would be especially true if the C library were designed with some kind of plugin support, and someone supplied a C++ plugin.
Well, yeah, things can be related to many things, but throwing extern "C"s was one of the motivations as I recall for 'r'. r is about a compiler optimization where we elide the runtime terminate check if we can statically "prove" a function can never throw. To prove it statically we depend on things like extern "C" functions not throwing, even though users can (and do) totally write that code.
The most common place where noexcept improves performance is on move constructors and move assignments when moving is cheaper than copying. If your type is not nothrow moveable std::vector will copy it instead of moving when resizing, as the move constructor throwing would leave the vector in an invalid state (while the copy constructor throwing leaves the vector unchanged).
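A minimal sketch of that vector case (Widget/FastWidget are made-up types):

    #include <string>
    #include <utility>
    #include <vector>

    struct Widget {                       // move ctor is potentially-throwing
        std::string data;
        Widget(const Widget&) = default;
        Widget(Widget&& other) : data(std::move(other.data)) {}
    };

    struct FastWidget {                   // move ctor promises not to throw
        std::string data;
        FastWidget(const FastWidget&) = default;
        FastWidget(FastWidget&& other) noexcept : data(std::move(other.data)) {}
    };

    // When std::vector<Widget> reallocates, it copies elements
    // (std::move_if_noexcept falls back to the copy constructor);
    // std::vector<FastWidget> moves them.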
Platforms with setjmp-longjmp based exceptions benefit greatly from noexcept as there’s setup code required before calling functions which may throw. Those platforms are now mostly gone, though. Modern “zero cost” exceptions don’t execute a single instruction related to exception handling if no exceptions are thrown (hence the name), so there just isn’t much room for noexcept to be useful to the optimizer.
Outside of those two scenarios there isn’t any reason to expect noexcept to improve performance.
There is another standard library related scenario: hash tables. The std unordered containers will store the hash of each key unless your hash function is noexcept. Analogous to how vector needs noexcept move for fast reserve and resize, unordered containers need noexcept hash to avoid extra memory usage. See https://gcc.gnu.org/onlinedocs/libstdc++/manual/unordered_as...
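As a sketch (Key/KeyHash are made up; the caching rule is the one described in the libstdc++ manual linked above):

    #include <cstddef>
    #include <functional>
    #include <string>
    #include <unordered_map>

    struct Key {
        int id;
        bool operator==(const Key& other) const noexcept { return id == other.id; }
    };

    struct KeyHash {
        // Per the manual linked above, marking the call operator noexcept is what
        // lets libstdc++ avoid caching the hash code in each node.
        std::size_t operator()(const Key& k) const noexcept {
            return std::hash<int>{}(k.id);
        }
    };

    std::unordered_map<Key, std::string, KeyHash> table;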
For many key types and access patterns, storing the hash is faster anyway. I assume people who care about performance are already not using std::unordered_map though.
This is the correct analysis. The article's author could have saved themselves (and the reader) a good amount of blind data diving by learning more about exception processing beforehand.
That's quite interesting, and a huge amount of work has been done here - respect for that.
Here's what has jumped out at me: the `noexcept` qualifier is not free in some cases, particularly when a qualified function could actually throw but is marked `noexcept`. In that case, a compiler still must set something up to fulfil the main `noexcept` promise - call `std::terminate()` if an exception is thrown. That means that putting `noexcept` on each and every function blindly, without any regard to whether the function could really throw or not (for example, `std::vector::push_back()` could throw on reallocation failure, hence if a `noexcept`-qualified function calls it, a compiler must take that into account), doesn't actually test/benchmark/prove anything, since, as the author correctly said, you won't ever do this in a real production project.
It would be really interesting to take a look at the full code of the cases that showed very bad performance; however, here we're approaching the second issue: if that's the core benchmark code: https://github.com/define-private-public/PSRayTracing/blob/a... then unfortunately it's totally invalid since it measures time with the `std::chrono::system_clock` which isn't monotonic. Given how long the code required to run, it's almost certain that the clock has been adjusted several times...
> in that case, a compiler still must set something up to fulfil the main `noexcept` promise - call `std::terminate()`
This is actually something that has been more of a problem in clang than gcc due to LLVM IR limitations... but that is being fixed (or maybe is already?) There was a presentation about it at the 2023 LLVM Developer's meeting which was recently published on their youtube channel https://www.youtube.com/watch?v=DMUeTaIe1CU
The short version (as I understand it) is that you don't really need to produce any code to call std::terminate; all you need is to tell the linker it needs to leave a hole in the table which maps %rip to the required unwind actions. If the unwinder doesn't know what to do, it will call std::terminate per the standard.
IR didn't have a way of expressing this "hole", though, so instead clang was forced to emit an explicit "handler" to do the std::terminate call
In MSVC we've also pretty heavily optimized the whole function case such that we no longer have a literal try/catch block around it (I think there is a single bit in our per function unwind info that the unwinder checks and kills the program if it encounters while unwinding). One extra branch but no increase in the unwind metadata size
The inlining case was always the hard problem to solve though
> then unfortunately it's totally invalid since it measures time with the `std::chrono::system_clock` which isn't monotonic. Given how long the code required to run, it's almost certain that the clock has been adjusted several times
Monotonic clocks are mostly useful for short measurement periods. For long-term timing, wall-clock time (with its adjustments) is more accurate because it will drift less.
Ah, that's a great correction, thank you!
Yes, indeed: due to drift, in order to discern second-plus (?) differences across different machines (or the same machine but different OSes?), one definitely needs to use wall-clock time, otherwise it's comparing apples to oranges. There are a lot of interesting questions related to that, but they're out of the scope of this thread. If I'm not mistaken, the author also timed some individual small functions, which, if correct, still poses a problem to me, but for measuring huge long-running tasks like a full suite running 10+ hours, they are probably right in choosing a wall-clock timer indeed.
However, before researching the results any further (for example, the -10% difference for the `noexcept` case is extremely interesting to debug down to the root cause), I'd still like to understand exactly how the code was run and measured. I didn't find a plausible-looking benchmark runner in their code base.
> I didn't know std::uniform_int_distribution doesn't actually produce the same results on different compilers
I think this is genuinely my biggest complaint about the C++ standard library. There are countless scenarios where you want deterministic random numbers (for testing if nothing else), so std's distributions are unusable. Fortunately you can just plug in Boost's implementation.
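A small sketch of the gap: the engine's output for a given seed is fully specified and portable, but the distribution's mapping of that output onto a range is not:

    #include <random>

    int roll_die() {
        std::mt19937 gen(42);                          // engine algorithm and seeding are
                                                       // specified: same raw sequence everywhere
        std::uniform_int_distribution<int> dist(1, 6); // mapping to [1, 6] is implementation-
                                                       // defined: libstdc++, libc++ and MSVC
                                                       // may return different values here
        return dist(gen);
    }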
It's actually really important that uniform_int_distribution is implementation defined. The 'right' way to do it on one architecture is probably not the right way to do it on a different architecture.
For instance, Apple's new CPUs has very fast division. A convenient and useful tool to implement uniform_int_distribution relies on using modulo. So the implementation that runs on Apple's new CPUs ought to use the modulo instructions of the CPU.
On other architectures, the ISA might not even have a modulo instruction. In this case, it's very important that you don't try to emulate modulo in software; it's much better to rely other more complicated constructs to give a uniform distribution.
C++ is also expected to run on GPUs. NVIDIA's CUDA and AMD's HIP are both implementations of C++. (these implementations are non-compliant given the nature of GPUs, but both they and the C++ standard's committee have a shared goal of narrowing that gap) In general, std::uniform_int_distribution uses loops to eliminate redundancies; the 'happy path' has relatively easily predicted branches, but they can and do have instances where the branch is not easily predicted and will as often as not have to loop in order to complete. Doing this on a GPU might be multiple orders of magnitude slower than another method that's better suited for a GPU.
Overzealously dictating an implementation is why C++ ended up with a relatively bad hash table and very bad regex in the standard. It's a mistake that shouldn't be made again.
But reproducibility is as important as performance for the vast majority of use cases, if these implementation-defined bits start to affect the observable outcomes. (That's why we define the required time complexity for many container-related functions but do not actually specify the exact algorithm; difference in Big-O time complexity is just large enough to be "observed".)
A common solution is to provide two versions of such features, one for the less reproducible but maximally performant version and another for common middle grounds that can be reproduced reasonably efficiently across many common platforms. In fact I believe `std::chrono` was designed in that way to sidestep many uncertainties in platform clock implementations.
> Overzealously dictating an implementation is why C++ ended up with a relatively bad hash table and very bad regex in the standard.
What parts of the standard dictate a particular regex implementation? IIRC the performance issues are usually blamed on ABI compatibility constraints rather than the standard making a fast(er) implementation impossible.
However, I personally disagree with them since I think it's really important to have _some_ basic reproducibility for things like reproducing the results of a randomized test. In that case, I'm going to avoid changing as much as possible anyways.
> There are countless scenarios where you want deterministic random numbers (for testing if nothing else), so std's distributions are unusable. Fortunately you can just plug in Boost's implementation.
I don't understand what your complaint is. If you're already plugging in alternative implementations, what stops you from stubbing these random number generators with any implementation at all?
> It's a compromised and goofy implementation with lots of warts.
I don't think this case qualifies as an example. I think the only goofy detail in the story is expecting a random number generator to be non-random and deterministic, with the only conceivable use case being poorly designed and implemented test fixtures.
> What's the point it in having a /standard/ library then?
The point of standardized components is to provide reusable elements that can be used across all platforms and implementations, thus saving on the development effort of upgrading and porting the code across implementations and even platforms. If you cannot design working software, that's not a problem you can pin on the tools you don't know how to use.
> The point of standardized components is to provide reusable elements that can be used across all platforms and implementations, thus saving on the development effort of upgrading and porting the code across implementations and even platforms.
It's a shame that C++'s "standardized" components ARE COMPLETELY DIFFERENT on different platforms.
Some of the C++ standard requires per-platform implementation work. For example std::thread on Linux and Windows obviously must have a different implementation. However a super majority of the standard API is just vanilla C++ code. For example std::vector or std::unordered_map. The fact that the standard defines a spec which is then implemented numerous times is absurd, stupid, and bad. The specs are simultaneously over-constrained and under-constrained. It's a disaster.
It permits implementations to take advantage of target-specific affordances (your thread case is an example) as well as taking different implementation strategies (e.g. the small string optimization is different in libc++ and libstdc++). Also you may use another, independent standard library because you prefer its implementation decisions. Meanwhile they remain compatible at the source level.
Unlike in C, in C++ it is not possible to use an independent implementation of the standard library.
Clang is compatible with GCC's standard library/libstdc++ and MSVC's standard library because the clang compiler explicitly supports them, but it's not possible to use clang's standard library with GCC in a standard conforming way or interchange GCC's with MSVC's standard library.
There are some hacks that let you use some parts of libc++ with GCC by using the nostdlib flag, but this disables a lot of C++ functionality such as exception handling, RTTI, type traits. These features are in turn used by things like std::vector, std::map, etc... so you won't be able to use those classes either, and so on so forth...
> but it's not possible to use clang's standard library with GCC
Of course it is. Both libc++ and GCC do make the effort to keep that compatibility going.
> in a standard conforming way
What is that supposed to mean here? GCC doesn't let you simply specify -stdlib=libc++ but while it's unfortunate it just means that you have to use -nostdlib++ and add the libc++ include and linking flags manually.
> There are some hacks that let you use some parts of libc++ with GCC by using the nostdlib flag, but this disables a lot of C++ functionality such as exception handling, RTTI, type traits. These features are in turn used by things like std::vector, std::map, etc... so you won't be able to use those classes either, and so on so forth...
-nostdlib++ is not a hack but the documented way for using a different standard library implementation. This doesn't prevent you from using exceptions and other runtime functionality.
Which musl issues are to do with GCC? Alternate C libraries are common on Linux e.g. uclibc, dietlibc, bionic... Not to mention also the other OSs GCC runs on that don't use glibc. Of course, mixing C libraries between the main executable and libraries probably won't work.
And it is common to find people on forums asking why their C code is crashing and burning, because while those libraries conform to the ISO C standard, their implementation-defined semantics aren't the same.
But the original post's meaning of "independent" runtime library was "compiler independent" (vs C++). Not that there was no difference between the C libraries.
This makes no sense. As I said the original poster was saying that g++ and libstdc++ are tangled together and it is not possible to use g++ with another implementation of the C++ libraries. But you can use gcc with other C libraries. As proven by gcc running on other OSs with non glibc libraries.
If anything isn't "independent" here (using your definition) it's the apps that rely on implementation detail - not the C library, the C programming language, or the compiler.
> Unlike in C, in C++ it is not possible to use an independent implementation of the standard library.
It sure is, and is pretty easy to do. I know companies using EASTL, libcu++, and HPX, as well as more who use Folly and Abseil which have alternate implementations to much of the standard library.
These days many languages have a single implementation, so some people whose experience is only in those environments complain that C++ is "unnecessarily complicated to use". But a lot of that flexibility and back compatibility is what allows these multiple implementations to thrive while representing different points in the design/feature space.
None of those are implementations of the C++ standard library. None of them even live in the same namespace as the standard library so your claim that they remain compatible at the source level is nonsense. Just a simple Google search would reveal that you are wrong about this and what's worse is that you place the burden on me to have to disprove your wrong assertions as opposed to providing references that justify your position:
"It complements (as opposed to competing against) offerings such as Boost and of course std. In fact, we embark on defining our own component only when something we need is either not available, or does not meet the needed performance profile."
"Abseil is an open-source collection of C++ library code designed to augment the C++ standard library. Abseil is not meant to be a competitor to the standard library"
> Just a simple Google search would reveal that you are wrong about this and what's worse is that you place the burden on me to have to disprove your wrong assertions as opposed to providing references that justify your position:
How rude. Have you used any of those libraries or perhaps relied on Google's "AI"-generated answer?
> None of those are implementations of the C++ standard library.
HPX and EASTL specifically are, the latter being heavily used in (unsurprisingly) the gaming community.
libcu++ is there for CUDA target compilation.
As for Folly and Abseil, I wrote "as well as more who use Folly and Abseil which have alternate implementations to much of the standard library" (i.e. not full drop-in replacements).
So really I don't know what your point is: you made an assertion, then replied not to what I actually wrote but by simply doing a quick google search and using that as your conclusion.
I have used all of HPX, Folly and Abseil but I guess the top of a google search result is more authoritative.
My point is simple, you are poorly informed on this topic and should refrain from speaking about it.
None of the libraries you listed are independent implementations of the standard library let alone source compatible.
EASTL does not claim to be an implementation of the C++ standard library, its claim is that it is an alternative to the C++ standard library. Perhaps the distinction is too subtle for you to have actually understood it but one thing is obvious, you have clearly never used it.
> as well as taking different implementation strategies (e.g. the small string optimization is different in libc++ and libstdc++
As a user this is really not a good thing when the stdlib is tied to the platform. In the end the only sane thing to do if you want any reproducibility across operating systems is to exclusively use libc++ everywhere.
>In the end the only sane thing to do if you want any reproducibility across operating systems is to exclusively use ...
no std library code in portable code. FTFY.
Of course there are no absolutes. In gamedev, essential type info bits and intrinsics are either allowed back in or wrapped. The algorithm library is another bit that's allowed for the most part (no unstable sort and such).
I know your approach of 'one ring - libc++ - to rule them all' is much more popular in the community, but gamedevs have needed cross-platform code for a long time. It has always worked well regardless of the opinions.
Just want to clarify what I think you meant: no std library code in portable binaries, something I agree with 100%.
If you distribute in source, I believe almost always the opposite is true: relying on the standard library is probably a win. Not every time: some complex code bases have nonstandard requirements or can benefit from nonstandard code. Gaming is a good example of this.
I am not sure I understand. You declare a win with no explanations or reasons.
What is so beneficial about having different implementations of the same functionality? Why does the source being available make a difference?
We can ignore cases of optimized per-platform implementations because std was not made for that. None of the platforms that make sense to support now were available back when the current ABI was set in stone.
Because the library is a spec, they don't have to be compatible at a binary level. This is obvious on a hardware architecture level (a 16-bit processor vs a 64-bit processor) but it's true even on the same hardware under the same OS (the different representations of std::string being a famous example). But they are all compatible at the API level -- which in C++ is the source level.
So this is why I mention compatibility at the source code level.
> What is so beneficial about having different implementations of the same functionality? We can ignore cases of optimized per-platform implementations because std was not made for that.
The point of having different implementations is that not every program has the same needs. The idea of a standard library is that it has a bunch of commonly-used functionality that you can just use so you can concentrate on your own program. You don't have to roll your own string class -- just use the standard one. You may later find you have special needs and need to roll your own, more restrictive string that does just what you want, but in most cases people won't have to. (You can argue whether the C++ standard library accomplishes this or not, but that's a separate matter).
A good example of this is std::map, which was overspecified and thus is almost never what you want. If the specification had been looser, then different implementations could have chosen different solutions, and even borrowed from each other.
And per-platform optimization is exactly part of std's requirements. Different implementations for the 16-bit and 64-bit cases is an easy to use example. The compilers output a lot of intrinsics to take advantage of CPU capabilities (a common example is memcpy, but there are many). Just try to read the source code for libc++ -- it's hard to read when you aren't familiar with it because it's 1 - full of corner cases so that it works with any code that uses it but also 2 - it's full of target-specific optimizations and special cases.
> None of the platforms that make sense to support now were available back when the current ABI was set in stone.
Well this is true, but since it's a spec it remains source-code compatible.
I don't know if your use of "ABI" was a typo for "API" or if you really meant "ABI". The APIs are set in stone because of back compatibility (like the notorious std::map example I call out above) but use of ABI, when used by the committee, refers to the de facto binary layout of code that's already been compiled. There are few platform ABI specifications for C++; they are mainly for C, with sometimes some Ada or FORTRAN calling convention stuff. Rarely do platforms say anything about C++, and when they do they don't specify much. There is also a little ABI requirement in C++ (e.g. address of a derived class must also be the address of its own most fundamental base class) but that's for something that is reflected at the source code level.
>> What is so beneficial about having different implementations of the same functionality?
> The point of having different implementations is that not every program has the same needs.
Are we still chatting about porting exactly the same game product to multiple platforms? Portable code means it performs exactly the same function in an app.
It is clear that we are talking past each other. I will leave you to it.
If you want to support different implementation strategies it needs to be far more piecemeal, not all or nothing. I mean there are only 3 meaningful implementations - libstdc++, libc++, and MSVC. And they aren't wholly interchangeable!
Quite frankly if you value trying different implementation strategies then the C++ model is a complete and total failure. A successful model would have many, many different implementations of different components. The fact there are just 3 is an objective failure.
See my parallel reply: there are much more than just three, and all work with the three/four most dominant compilers these days as well as less dominant ones like EDG or Intel.
No, there are just 3 relevant standard implementations. There are numerous independent libraries that provide very similar but non-conformant functionality.
My complaint is that the C++ standards committee should, when possible, release code and not a specification. They shouldn't release a std::map spec that 3 different vendors implement. The committee should write and release a single std::map implementation. It's just vanilla C++ after all.
My proposal does not prohibit Abseil, Folly, etc from releasing their own version of map which may, and likely will, choose different constraints and trade-offs.
Rust's standard library is not a spec, it's just code. There are many, many, many crates that implement the same APIs with different implementations and behavior. Sometimes those crates even get promoted and become the standard implementation. This is, imho, a far superior approach than the C++ specification approach.
> Rust's standard library is not a spec, it's just code.
I consider this a profound weakness, not a strength.
Don’t get me wrong: I recognize the benefits in the short term! But really long lived languages like FORTRAN, Lisp, C++ have benefited hugely from a spec-based standards approach adopted from other engineering practice. They have also benefited from cross-fertilization from different implementations which influenced later standard and thus each other.
This is why standards from building codes to electrical systems, to ships, manufacturing QC sampling, TCP/IP (and all the internet RFCs) and basically the entire corpus of ISO standards are spec based.
If you want to build long-lived engineered systems it’s worth learning from people who figured out a lot of the metaprocesses the hard way, some of them for more than a century ago.
The fact that some things benefit from a spec does not mean that all things do. Almost everything defined by the C++ committee since 2014 is awful. The specs, once published, are unable to evolve due to ABI.
The Rust standard library is soooooooo much better than C++’s. By leaps and bounds. And it continues to improve with time. C++ is far worse and far more stagnant. That’s lose/lose!
I don’t see how you could possibly claim that std::map and std::deque being a spec is a profound strength.
The fact that you celebrate non-spec implementations such as Abseil and Folly seem to me to be evidence supporting implementations over specs!
To be clear I’m talking about the standard library, not the core language syntax.
> Computer programs that other computer programs use all require a detailed functional spec.
And yet most programs that are used by other programs do not provide a detailed functional spec! How curious.
Most computer programs do not have a formal, detailed functional spec. They simply are what they are. Furthermore, the type of specs we are talking about are incomplete and purposefully leave a lot of room open for implementers to make different choices. Their choices are unspecified but fully relied upon.
Hyrum's Law: With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody.
std::deque has an underdefined spec such that the MSVC implementation meets the spec but is utterly worthless. And it can't be fixed because that would break the ABI.
In this thread I'm specifically talking about the C++ standard library specification and implementations. Whether other software benefits from a detailed spec or not is outside the scope of this conversation. I maintain that the C++ standards committee should provide a std::deque implementation and not a spec. Thus far no one has even attempted to argue why it's better as a spec. Womp womp.
C++'s approach also doesn't stop alternative approaches implemented in third-party libraries and sometimes those also do get added to the C++ standard library spec (many standard APIs are very close to pre-existing Boost APIs, for better or worse).
Since most standard library implementations are open source you CAN also pick components individually even if it takes a bit more effort to get all the required support cruft and avoid namespace clashes.
std::deque, a container with some quite useful theoretical properties, is completely unusable because the node size is not specifiable by the user and MSVC chose 16 bytes (I think, insanely small nonetheless).
I don't feel like this article illuminates anything about how noexcept works. The asm diff at the end suggests _there is no difference_ in the emitted code. I plugged it into godbolt myself and see absolutely no difference. https://godbolt.org/z/jdro5jdnG
It seems the selected example function may not be exercising noexcept. I suppose the assumption is that operator[] is something that can throw, but ... perhaps the machinery lives outside the function (so should really examine function calls), or is never emitted without a try/catch, or operator[] (though not marked noexcept...) doesn't throw b/c OOB is undefined behavior, or ... ?
You can't just look at the codegen of the function itself, you also have to consider the metadata, and the overhead of processing any metadata
Specifically here (as I said in other comments) where it goes from complicated/quality of implementation issue to "shit this is complicated" is when you consider inlining. If noexcept inhibits inlining in any conceivable circumstances then it's having a dramatic (slightly indirect) impact on performance
> I don't feel like this article illuminates anything about how noexcept works. The asm diff at the end suggests _there is no difference_ in the emitted code.
You are absolutely correct. The OP is basically testing the hypothesis "Wrapping a function in `noexcept` will magically make it faster," which is (1) nonsense to anyone who knows how C++ works, and also (2) trivially easy to falsify, because all you have to do is look at the compiled code. Same codegen? Then it's not going to be faster (or slower). You needn't spend all those CPU cycles to find out what you already know by looking.
There has been a fair bit of literature written on the performance of exceptions and noexcept, but OP isn't contributing anything with this particular post.
The second one is much more interesting, because it shows where `noexcept` can actually have an effect on codegen in the core language. TLDR, it can matter on functions that the compiler can't inline, such as when crossing ABI boundaries or when (as in this case) it's an indirect call through a function pointer.
https://quuxplusone.github.io/blog/2022/07/30/type-erased-in...
I would like to have seen a comparison that actually includes -fno-exceptions, rather than just noexcept. My assumption is that to get a consistent gain from noexcept, you would need every function called to be explicitly noexcept, because a bunch of the cost of exceptions is code size and the state required to support unwinding. So if the performance cost of exception handling is due to that, then the overhead remains as long as _anything_ can cause an exception (or, more accurately, unless every opaque call is explicitly indicated not to cause an exception).
That said, I'm still confused by the perf results of the article, especially the perlin noise vs MSVC one. It's a sufficiently weird outlier that it makes me wonder if something in the compiler has a noexcept path that adds checks that aren't usually on (i.e. imagine the code has a "debug" mode that does bounds checks or something, but the function resolution you hit in the noexcept path always does the bounds check - I'm really not sure exactly how you'd get that to happen, but "non-default path was not benchmarked" is not exactly an uncommon occurrence).
Even a speedup of around 1% (if it is consistent and in a carefully controlled experiment) is significant for many workloads, if the workload is big enough.
The OP has this as in the fuzz, which it may be for that particular workload. But across a giant distributed system like youtube or Google search, it is a real gain.
programs can be quite sensitive to how code is laid out because of cache line alignment, cache conflicts etc.
So random changes can have a surprising impact.
There was a paper a couple of years ago explaining this and how to measure compiler optimizations more reliably. Sadly, I do not recall the title/author.
I can't find the explanation of why noexcept could hurt performance. One reason I can see is that some containers like unordered_map can inline the hash along with the key with noexcept, which may not be worth the additional memory overhead if the hashing is relatively cheap. He talks a bit about it in "Intel+Windows+MSVC" but without much info. I wish there was more.
noexcept helps in some cases that the author doesn't seem to be using, and any performance gain or loss is basically due to some (unrelated?) optimization decisions the compiler makes differently in noexcept builds, if I am understanding correctly?
This is super unrelated to the optimisation, and is just related to the cmake setup - instead of common.hpp having
#ifdef USE_NOEXCEPT
#define NOEXCEPT noexcept
#else
#define NOEXCEPT
#endif
and cmake being:
if (WITH_NOEXCEPT)
message(STATUS "Using `noexcept` annotations (faster?)")
target_compile_definitions(PSRayTracing_StaticLibrary PUBLIC USE_NOEXCEPT)
else()
message(STATUS "Turned off use of `noexcept` (slower?)")
endif()
, the cmake could just be:
if (WITH_NOEXCEPT)
message(STATUS "Using `noexcept` annotations (faster?)")
target_compile_definitions(PSRayTracing_StaticLibrary PUBLIC USE_NOEXCEPT=noexcept)
else()
message(STATUS "Turned off use of `noexcept` (slower?)") target_compile_definitions(PSRayTracing_StaticLibrary PUBLIC USE_NOEXCEPT=)
endif()
No need for these shared "common" config headers.
Back on topic, this doesn't surprise me. There's this idea that C++ is fast, and that people who work with C++ are focused on optimisation, but in my experience there are many of these theoretical ideas about performance that aren't backed up by numbers yet are now ingrained in people. See https://news.ycombinator.com/item?id=41095814 from last week for another example of dogmatic guidelines having the wrong impact.
There's a lot of mysticism and superstition surrounding C++ exceptions. It's instructive to sit down with godbolt and examine specific scenarios in which noexcept (or exceptions generally) can affect performance. Read the machine code. Understand why the compiler does what it does. Don't want to invest at that level? You probably want to use a higher level language.
You won't get a sense of how bad exceptions can be by using Godbolt. A lot of the magic of exceptions is handled behind the scenes by the compiler and/or the Itanium ABI. For example, one disastrous consequence of using exceptions in GCC is that there is a global, application-wide lock used to manage stack unwinding.
This means that only one single thread can unwind a stack at a time and the lock is held from the start of the exception being thrown until the very last destructor is called. If you have a multicore server with 100 threads, and one of those threads throws an exception, you better hope that no other thread throws an exception because even if those two threads are entirely independent of one another, one of them will block.
> For example, one disastrous consequence of using exceptions in GCC is that there is a global, application-wide lock used to manage stack unwinding.
This might have been (partially?) fixed? GCC Bug 71744 "Concurrently throwing exceptions is not scalable" is marked "RESOLVED FIXED" [0], and commit 6e80a1d164d1 in particular looks interesting:
> eliminate mutex in fast path of __register_frame
>
> <snip>
>
> This commit eliminates both the mutex and the sorted list from the atomic fast path, and replaces it with a btree that uses optimistic lock coupling during lookup. This allows for fully parallel unwinding and is essential to scale exception handling to large core counts.
I'm not particularly familiar with the unwinding machinery though so I don't know if the issue is fully resolved.
No one ever claimed that throwing exceptions is fast, and quite honestly you should not care about the performance of that too much; instead, make sure that exceptions actually are exceptional.
What is interesting, though, is the impact of exceptions when they are not being thrown. This includes setup code (mostly avoided by good exception handling implementations) but also inhibited optimization opportunities and executable size bloat.
Or set the compiler flag -fno-exceptions and ban the use of exceptions. While it isn’t standard-compliant, a surprisingly large number of companies and projects follow these practices.
No, noexcept confusingly doesn't mean "does not throw exceptions" in that sense. There is no constraint that says you can only call noexcept code from noexcept code - quite the opposite. Noexcept puts NO constraints on the code.
All noexcept does is catch any exception and immediately std::terminate. Confusingly this means that noexcept should really be called deathexcept, since any exception thrown within kills the program.
> All noexcept does is catch any exception and immediately std::terminate.
While that's a possible implementation, the standard is a bit more relaxed: `noexcept` may also call `std::terminate` immediately when an exception is thrown, without calling destructors in the usual way a catch block would do.
More than an optimization, it's a different exception handling philosophy.
AFAIK Itanium ABI exception handling requires two-phase unwinding: first the stack is traversed looking for a valid landing pad; if that succeeds, the stack is traversed again, calling all destructors. If it fails, it calls std::terminate. This is actually slower as it needs to traverse twice, but the big advantage is that if the program would abort, the state of the program is preserved in the core file. This is easily generalized to noexcept functions: no unwind info is generated for those, so unwinding always fails.
MSVC used to do one pass unwind, but I thought they changed it when they implemented table based unwind for x64.
> All noexcept does is catch any exception and immediately std::terminate.
I don't think this is a decent interpretation of what noexcept does. It misses the whole point of this feature, and confuses a failsafe with the point of using it.
The whole point of noexcept is to tell the compiler that the function does not throw exceptions. This allows the compiler to apply optimizations, such as not needing to track down the necessary info to unwind the call stack when an exception is thrown. Some containers are also designed to only invoke move constructors if they are noexcept and otherwise will copy values around.
As the compiler omits the info required to recover from exceptions, if one is indeed thrown and bubbles up to the noexcept function then it's not possible to do the necessary janitorial work. Therefore, std::terminate is called instead.
Except the compiler still needs to ensure that if an exception bubbles through a noexcept function, std::terminate will be called, even if the exception were caught at some level above. So it still needs to keep track of the noexcept frames, even when inlining the noexcept function.
Generally what a language feature is intended to do is less relevant than what it actually does, over time. Just like the "inline" keyword was intended as a compiler hint, but what it actually does is change the visibility of symbols across compilation units, and alter the rules for static variables.
Of course, noexcept isn't as useless as inline, yet. There are real uses of it as a hint in template metaprogramming, as you said.
> Except the compiler still needs to ensure that if an exception bubbles through a noexcept function, std::terminate will be called, even if the exception were caught at some level above.
That's the failsafe I mentioned.
> Generally what a language feature is intended to do is less relevant than what it actually does, over time.
Not true, and again this take misses the whole point. The point of noexcept is to declare that a function does not throw exceptions. If a function is declared as noexcept, the whole point is that it does not throw exceptions. How the runtime handles a violation of this contract is something that only expresses in a bug. The same applies for exceptions thrown in destructors. It makes no sense to describe destructors as something that calls std::terminate .
Also, of course destructors are not "something that calls std::terminate", because they do many other things.
For function definitions, noexcept is perfectly equivalent to wrapping the body of your function in try {} catch (...) {std::terminate();}. In fact, noexcept doesn't even add any extra optimization options for the compiler: the compiler can already use the presence of such a block to decide that your function can't throw exceptions and optimize accordingly. Whether it can also elide this block is a different matter, and noexcept doesn't help with that. It's just a nifty piece of syntax instead of this block.
Conversely, the noexcept declaration on the function doesn't allow the compiler to deduce anything about exceptions not being thrown inside the function, because the standard doesn't forbid throwing exceptions.
The only purpose noexcept serves other than a try/catch block is when used with declarations - there is indeed a difference in how the compiler can optimize a call to a function declared `extern void foo() noexcept;` and a function declared as `extern void foo();`
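A rough sketch of that declaration-side difference (all names here are hypothetical):

    struct Guard { ~Guard(); };          // non-trivial cleanup

    extern void foo();                   // potentially-throwing declaration
    extern void bar() noexcept;          // promises no exception escapes

    void call_foo() { Guard g; foo(); }  // needs landing-pad/unwind info so ~Guard()
                                         // runs if foo() throws
    void call_bar() { Guard g; bar(); }  // the compiler may assume no unwinding ever
                                         // starts here and drop that machinery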
Panic=abort is more like -fno-exceptions since it applies to all the code being compiled and not just one function. Codegen can also take advantage of the fact that it won't have to unwind.
I don't think there is a rust equivalent of noexcept.
The behavior of `-fno-exceptions` isn't standardized, because it's a compiler feature, not a part of the C++ standard. The standard says:
> In the situation where no matching [exception] handler is found, it is implementation-defined whether or not the stack is unwound before std::terminate is invoked. In the situation where the search for a handler encounters the outermost block of a function with a non-throwing exception specification, it is implementation-defined whether the stack is unwound, unwound partially, or not unwound at all before the function std::terminate is invoked.
So, the whole thing is basically implementation-defined (including `-fno-exceptions`, since that is something that implementing compilers provide).
The compiler can tell about the immediate function, but not any functions it calls.
If a function marked noexcept calls a function that throws an exception, then the program is terminated with an uncaught exception. A called function can throw through a non-noexcept function to a higher-level exception handler no problem.
So in order to avoid changing the semantics of the function, the compiler would have to be able to determine that that transitive closure of called functions dynamically don't throw, and that problem is undecidable, even assuming the requirement that "the compiler can see the source of all those functions" is somehow met, which it won't be.
> A called function can throw through a non-noexcept function to a higher-level exception handler no problem.
This is exactly the problem. To have made this a useful feature, it should have been more restrictive: a noexcept function should not have been allowed to call any function or operator or lambda that is not marked noexcept. Some extra syntax to allow function templates to be made "conditionally noexcept" would have been necessary, but overall the feature would have had a real use and real power to help make code safer, and more performant.
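For reference, a conditional form already exists in today's noexcept via the noexcept operator; a sketch of a conditionally noexcept swap:

    #include <type_traits>
    #include <utility>

    // noexcept(expr): my_swap is noexcept exactly when T's moves can't throw
    template <class T>
    void my_swap(T& a, T& b)
        noexcept(std::is_nothrow_move_constructible_v<T> &&
                 std::is_nothrow_move_assignable_v<T>)
    {
        T tmp = std::move(a);
        a = std::move(b);
        b = std::move(tmp);
    }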
Java has the first part down, for the class of checked exceptions: a function that doesn't throw can't call functions that do (except in try/catch blocks, but that's largely irrelevant). The annoyances come because of the missing second part - the ability to make a generic function that throws for some type parameters, but doesn't for others.
> Java has the first part down, for the class of checked exceptions: a function that doesn't throw can't call functions that do (except in try/catch blocks, but that's largely irrelevant).
That's not actually true, it's possible to do "sneaky throws" (https://www.baeldung.com/java-sneaky-throws) of a checked exception from a method which isn't declared to throw that checked exception. The classic example is Class.newInstance(), which propagates any exception from the called constructor. Other ways are calling code from JVM languages other than Java, which do not have the same "checked exception" concept (like Kotlin: see https://kotlinlang.org/docs/exceptions.html and https://kotlinlang.org/docs/java-to-kotlin-interop.html#chec...), generics trickery to confuse the compiler, and manually creating JVM bytecode.
Sure, but that's essentially the same as using explicit jumps through raw assembly instructions to go around C++'s destructor guarantees. That is, when your Java process runs non-Java code, of course this can defeat certain Java guarantees. No programming language can make promises for semantics of external code like this.
That's still the case in pure Java code. The "Class.newInstance()" method is a public method on the core Java API, calling "MyClass.class.newInstance()" is mostly equivalent to "new MyClass()". And the generics trick in the "sneaky throws" article I linked is also pure Java code, without any calls to "sun.misc.Unsafe".
Class.newInstance() is a known unsafe method and has been deprecated for quite some time now (since Java 9). It's similar to Haskell's unsafePerformIO from this point of view.
The generics hole is indeed interesting, but it's ultimately a known limitation arising from how generics were implemented in Java, the presence of type inference, and the design of the exception hierarchy, rather than an intentional feature. When inferring the type of T to apply in that example, there is no good unique solution: inferring T = Throwable would have been safer, but it makes many simple cases behave unexpectedly, especially with lambdas. Inferring T = RuntimeException is unexpected and unsafe, but in practice it makes many common cases much more usable, so a call was made to do it, despite the hole.
C++'s templates wouldn't have a similar problem, as they actually instantiate the definition at compile time and can re-check it. There is also no equivalent of the ambiguous inference, because C++ doesn't do type inference of this kind at all, and there is no analogous problem with the exception hierarchy. Even if there were, C++ could also make the opposite choice to Java and infer the safer option when both `noexcept` and `potentially-throws` were possible.
And of course Lombok is a tool for modifying the compilation of Java, so writing Lombok code is not exactly writing pure Java.
That would have been too limiting since there are many (e.g. C) functions that can never throw but are not marked noexcept. Not being able to mark your function noexcept just because you call some standard math function would be counterproductive.
There were ways around this. C functions in the standard library could easily be marked noexcept, for one. Extern C could also imply noexcept. Also, explicit try/catch blocks could be required when calling a potentially-throwing function from a noexcept function - slightly annoying, but not a huge problem in practice.
Overall this was just another case of the C++ community's preference for speed over safety and robustness. Nothing new, but still a shame to see that the attitude hasn't changed at all.
> There were ways around this. C functions in the standard library could easily be marked noexcept, for one.
C++ standard library functions, yes; C standard library functions, including those imported into C++, are more complicated, since on most platforms those are not provided by the C++ implementation but by some other lower-level library. Third-party libraries are the bigger concern, though.
> Extern C could also imply noexcept.
In theory this might be correct, but in practice it will cause problems with C functions that take callbacks which the user might want to throw from. These are already problematic, especially since the C code won't unwind anything, but you'd still need to concern yourself with potentially breaking user code.
> Also, explicit try/catch blocks could be required when calling a potentially-throwing function from a noexcept function - slightly annoying, but not a huge problem in practice.
Yes, but avoiding that is the whole point. The try/catch for when you call potentially-throwing functions is already implicitly provided by the current noexcept, and if your noexcept function calls only noexcept functions then the compiler can already elide all that.
> Overall this was just another case of the C++ community's preference for speed over safety and robustness.
Not at all. If that was the case then noexcept would only be a compiler hint and exceptions bubbling up to noexcept functions would be undefined behavior. Arguably that would be better than what we have now.
> C++ standard library functions, yes; C standard library functions, including those imported into C++, are more complicated, since on most platforms those are not provided by the C++ implementation but by some other lower-level library. Third-party libraries are the bigger concern, though.
Since noexcept is a compilation-level hint, it only matters that the declaration contains them in the <cstdXX> headers, not who provides the implementations. And since C functions can't deal with exceptions thrown from callbacks, it would make sense to annotate any function pointer that you provide with noexcept as well.
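A sketch of what annotating a callback could look like in today's C++ (callback_t, safe_cb, and risky_cb are made-up names; since C++17, noexcept is part of the function type):

```cpp
// A callback type that demands non-throwing functions at compile time.
using callback_t = void (*)(void*) noexcept;

void safe_cb(void*) noexcept {}
void risky_cb(void*) {}

callback_t ok = safe_cb;       // fine
// callback_t bad = risky_cb;  // error: a potentially-throwing function
                               // pointer can't convert to a noexcept one
```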
> Yes, but avoiding that is the whole point. The try/catch for when you call potentially-throwing functions is already implicitly provided by the current noexcept, and if your noexcept function calls only noexcept functions then the compiler can already elide all that.
The current implementation calls std::terminate() if an exception is caught. That needn't be the case in practice: you could well handle the exception in a meaningful way and continue execution. Also, having the programmer explicitly do this is valuable in itself, even if they also just call std::terminate(). For example, with the way noexcept is implemented today, you have no notification if a function you're calling goes from noexcept to potentially throwing until your program actually crashes. If noexcept was an actual compiler-checked keyword, your code would stop compiling if one of the functions you rely on started throwing exceptions, and you could decide what to do (maybe catch it, maybe find an alternative, etc).
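One way to approximate that "stop compiling" behavior in today's C++ is the noexcept operator plus static_assert (dependency and critical_path are hypothetical names):

```cpp
void dependency() noexcept {}  // assumed: a function we rely on not throwing

void critical_path() noexcept {
    // If dependency() ever loses its noexcept, this assertion fails at
    // compile time instead of the call silently turning into a
    // std::terminate at run time.
    static_assert(noexcept(dependency()),
                  "dependency() must not throw");
    dependency();
}
```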
> Not at all. If that was the case then noexcept would only be a compiler hint and exceptions bubbling up to noexcept functions would be undefined behavior. Arguably that would be better than what we have now.
Crashing the program is of course better than UB. But it's still the worst possible thing to happen short of that. It basically makes noexcept functions the scariest functions to ever call, instead of the safest. And if you want to write a piece of code that can never throw an exception, the compiler still won't help you in any way to do that; it's up to you to do it correctly or crash.
No, we compile in bottom up order, starting with leaf functions, and collecting information about functions as we go. So "not throwing" sort of trickles up when possible to a certain degree.
In LTCG (MSVC)/O3 (GCC/Clang) there are prepasses over the entire callgraph to collect this order
No, it can't do that. My speculation is that this is because, in the general case, it might be an NP-hard problem, similar to the halting problem: https://en.wikipedia.org/wiki/Halting_problem.
I think their point was that the halting problem isn't merely NP-hard, it is in fact undecidable. Since there is no algorithm that can solve it, there is no point in talking about the complexity of that algorithm.
Alternatively, Java shows that it is very much possible to do this - the compiler can enforce that a function can't throw exceptions (limited to checked exceptions in Java, but that is beside the point). The way to do it is easy: just check that the function doesn't throw directly, and that all of the functions it calls are also marked noexcept. No need to explore things any deeper than that.
Of course, the designers of C++ didn't want to impose this restriction for various reasons.
Whether a function can throw is a pretty basic bit of information collected in bottom-up codegen (or during the pre-pass of a whole-program optimization) and in no sense NP-hard. Compilers have been doing it for decades and it's useful.
Noexcept on the surface is useful, except for the terminate guarantee, which requires a ton of work to avoid metadata size growth and hurts inlining. If violations of noexcept were UB and it was a pure optimization hint the world would be much better
Interestingly, AUTOSAR C++14 Guidance (https://www.autosar.org/fileadmin/standards/R22-11/AP/AUTOSA...) had a "Rule A15-4-4 (required, implementation, automated) A declaration of non-throwing function shall contain noexcept specification.", which was thankfully removed in MISRA C++:2023 (the latest guidance, for C++17; can't give a link, it's a paid document) - it now mandates noexcept only for a few special functions (destructors, move constructors/assignments, and a few more).
Yes, true, thanks. I confused "will it throw given inputs&state?" with "can it potentially throw?".
I wonder why compilers don't expose that information. Some operator returning a tri-state "this code provably doesn't throw | could throw | can't see, break the compilation" could help writing generic code immensely. Instead we have to resort to the multi-storey noexcept() operator inside a noexcept qualifier, which is very detrimental to code readability...
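For reference, the "multi-storey" pattern being referred to looks roughly like this (process and wrap are illustrative names):

```cpp
#include <utility>

void process(int) noexcept {}  // non-throwing overload
void process(double) {}        // potentially-throwing overload

// The outer noexcept(...) *specifier* takes the result of the inner
// noexcept(...) *operator* applied to the wrapped call.
template <typename T>
void wrap(T&& v) noexcept(noexcept(process(std::forward<T>(v)))) {
    process(std::forward<T>(v));
}

static_assert(noexcept(wrap(1)));     // wrap<int> is noexcept
static_assert(!noexcept(wrap(1.0)));  // wrap<double> is not
```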
You very likely can't always answer the question "will this function throw?", but it should be relatively easy to identify the subset of functions that recursively call only functions guaranteed not to throw. That's only a subset of all non-throwing functions, of course.
Yes, my mistake was confusing the run-time question "will it throw?" with "can it throw in theory?". The latter, if I'm not mistaken again, only requires a throw statement somewhere in non-dead code, which is totally possible to find for code the compiler can see.
Is there any compiler option to have it yell at you if you mark something that can throw as `noexcept`? That seems to be the cause of (at least some of) the slowdowns where the compiler is forced to accommodate std::terminate. I feel like these situations are more commonly mistakes, rather than the user wanting to "collapse" exceptions into terminations. So the current approach to dealing with these cases seems suboptimal not only from a performance perspective, but from a behavior perspective as well.
No; throwing in a noexcept function is defined behavior (call std::terminate), and defined behavior doesn't require a diagnostic.
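A tiny illustration of the "defined behavior, no required diagnostic" point (some compilers do warn here, but none are required to):

```cpp
#include <exception>

void f() noexcept {
    throw 42;  // well-formed; the exception can never escape f()
}

int main() {
    try {
        f();   // std::terminate is called when the exception reaches the
               // noexcept boundary; the handler below never runs
    } catch (...) {
    }
}
```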
I think maybe WG21 was concerned a compiler engineer would be clever if throwing in noexcept were UB - for example, assume any block that throws is unreachable and just remove it, along with all the blocks it postdominates. Compiler guys love optimizations that just remove code. The fastest and smallest code is code that can't run and doesn't exist.
Is this some kind of new clickbait title? Something "can" "sometimes" do something (already zero information) - or sometimes it also does the opposite. The only possibility seemingly not allowed is that it makes no difference at all, but even that is actually possible. Sigh.