Perhaps even more relevant, it's already the name of a persistence library for Lua. Probably unmaintained at this point, but it's been around for decades (version 1.2 was released in 2004). Unless names are invented words, collisions are inevitable at this point.
If I mostly want to keep my own document repository (PDF, Word, etc.) with search across all included documents and, maybe, the ability to add annotations, is this the right tool? If not, any suggestions?
Yes, this can be used as a document manager for those purposes, with selected plugins (which ones to use is a matter of preference; the list is not too long [1]). It now has the ability to open and annotate PDFs, although for a long time it did not. There are additional note-taking plugins.
It doesn't have all the features of traditional document managers, but it will be far better for handling annotations.
I don't know much about JabRef itself, but a quick search seems to indicate that one of the key differences is Word document integration - JabRef is focused on BibTeX.
One thing that surprised me a bit was that exceptions are more or less recommended in these guidelines. The section "4.18 Exception handling" mentions that "most compilers provide options (such as -fno-exceptions) that can be used to disable exceptions in order to eliminate the code and size overheads", and then goes on to describe how this makes it impossible to comply with a bunch of other rules.
Since these are "guidelines for the use of C++17 in critical systems", I would have expected them to prohibit exceptions due to their non-deterministic nature. On a side note, dynamic memory is prohibited (rule 21.6.1).
I haven't read the earlier MISRA C++ guidelines, so I don't know if this has changed.
Exceptions are not non-deterministic, although the control flow can be a bit non-obvious, of course.
I think if you disallow exceptions you run into other problems. How can a constructor fail now? You need to have some kind of flag showing whether an object is fully constructed. But then that goes against the idea of "make illegal states unrepresentable".
I do wish C++ had more tools to rein in exceptions. Maybe an "onlythrows X" annotation that says only these very specific exceptions may escape from a block, and the checker will complain if it cannot prove that only X can be thrown. The opposite of checked exceptions, basically.
The constructor error problem is easily solved by using factory functions and two-phase construction. The problem is that the standard library relies on exceptions quite a bit, so major parts of it become unusable.
> If the copy constructor can fail and you don't want that, then delete it?
You're trivialising just how deeply embedded exceptions are into the design of the language.
I gave just one example and it was not meant to be exhaustive, just one 'gotcha' that you won't find out about till runtime, when your program (worst case) starts giving you slightly incorrect results without you knowing about it...
So, yeah, if you want to do without exceptions (without having your program execute random code) you need to know in advance what special cases to handle, like unintended copy construction, or failures in overloaded operators, or which std libs can be used and which cannot, or which C++ libraries can be linked, and which cannot.
All of which is perfectly possible, but taken together is hardly "easy". It's tedious, error-prone, bloated ... but hardly what someone would call "easy".
C++ is complicated, I get it. Things that are "easy" in other languages are "hard" in C++. That doesn't mean that writing C++ code that can't throw isn't something that tens of thousands of engineers are doing every day. One could argue that all of C++ is tedious, bloated and error-prone.
I don't think this is true, but I don't know this for sure.
From what I have read, it seems that 'only' Bloomberg, Meta and Microsoft use C++ with exceptions.
And since both Microsoft and Meta are adopting Rust in their services, it seems to me that they are looking for a language other than C++ (why else adopt a new language?).
LLVM has for quite some time been driven by Apple and Google.
WebKit, another Apple child.
Qt has to support environments where exceptions are not allowed, otherwise they would be losing customers, especially since Qt is older than C++98.
GCC was initially written in C, and for quite a long time had a mixed code base with minimal C++.
The companies adopting Rust aren't doing so because of a lack of exceptions (they would still adopt Rust if the language had exception support, which panic and std::ops::Try kind of are), but rather because of the type safety that C and C++ aren't able to provide.
You would be surprised how many games actually do support exceptions.
Handling constructor failure is one of the least valuable use cases for exceptions. Idioms for exception-free construction are straightforward and some cases will require these idioms even with exceptions.
Resource exhaustion or hardware failures are the more straightforward use cases for exceptions, but doing anything clever in those cases requires writing similar handling code as you would without exceptions.
Maintaining state invariants is trivial without exceptions thrown from constructors. Just make constructors private and write a public static factory method. Allowing exceptions in constructors, on the other hand, creates the problem of what the destructor of an only partially constructed object should do. It's just unnecessary.
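A minimal sketch of that idiom (the Connection and open_socket names are made up for illustration):

    #include <optional>
    #include <string>

    // The constructor is private and cannot fail; the static factory does the
    // fallible work and reports failure through the return type instead of throwing.
    class Connection {
    public:
        static std::optional<Connection> create(const std::string& address) {
            int fd = open_socket(address);      // some fallible operation
            if (fd < 0) {
                return std::nullopt;            // failure, no exception needed
            }
            return Connection(fd);              // only fully valid objects are constructed
        }

    private:
        explicit Connection(int fd) : fd_(fd) {}  // trivial, never throws
        int fd_;

        // Stand-in for whatever system call the real class would make.
        static int open_socket(const std::string& address) {
            return address.empty() ? -1 : 3;
        }
    };

    // Usage: if (auto conn = Connection::create("db:5432")) { /* use *conn */ }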
I think there is a misconception on your part. Formally, there is no "half-constructed" object, at least not in any way that's missing an automatic mechanism to unwind mid-way (i.e. half-deconstruct).
Each object construction is a list of N + 1 construction stages -- constructing the N subobjects (implicitly or as per the initializer list), followed by the constructor body.
The destructor has N + 1 stages too, those match exactly the constructor's stages. If there is an exception happening in any stage of the constructor, say stage E, naturally only stages 0 to E-1 get un-done (in reverse), but not E nor any later stage.
So what you have to do is imagine that all sub-objects are already constructed, and imagine there is a function that runs the constructor body and destructor body in a sequence. The constructor part could throw an exception, causing the destructor part to never run. Like any other function, it should be possible to run it without leaking anything if an exception happens. Make it "exception safe" using RAII or by being extra careful.
If you've written the constructor and destructor such that they match in this way (which is, again, how you would write any other function), then it will work correctly in all cases. This is a powerful concept and pretty much fool-proof. I say that as someone who has lots of concerns about the language's complexity -- including exceptions.
If you call mmap/VirtualAlloc/open/fopen in your constructor and later it throws, you will have a resource leak, because the destructor won’t clean it up.
Again, you need to make the constructor exception safe, that's just like any other function. Just imagine the constructor and destructor bodies as one combined function. (Of course don't forget the method calls in between but those should preserve the class invariant).
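Rough sketch of what that looks like, with invented names: the raw handle lives in its own RAII member, so a throw later in the enclosing constructor can't leak it:

    #include <cstdio>
    #include <stdexcept>

    // Owns the FILE* so that its destructor always closes it.
    class FileHandle {
    public:
        explicit FileHandle(const char* path) : f_(std::fopen(path, "rb")) {
            if (!f_) throw std::runtime_error("open failed");
        }
        ~FileHandle() { if (f_) std::fclose(f_); }
        FileHandle(const FileHandle&) = delete;
        FileHandle& operator=(const FileHandle&) = delete;
        std::FILE* get() const { return f_; }
    private:
        std::FILE* f_;
    };

    class Parser {
    public:
        explicit Parser(const char* path)
            : file_(path)                 // subobject stage: acquires the resource
        {
            // Constructor body: if this throws, the already-constructed file_
            // member is destroyed during unwinding, so the fopen() is not leaked.
            if (std::fgetc(file_.get()) == EOF) {
                throw std::runtime_error("empty file");
            }
        }
    private:
        FileHandle file_;
    };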
In the domain where Misra is applied, resource exhaustion and hardware failures are totally valid scenarios that need to be processed in the same way as any other error.
Prohibiting exceptions is a toxic antipattern. Once you have more than one thread you want to propagate fatal errors in a sane way. ("Just crash the whole program, #yolo" is not a sane way.)
Exceptions are not a sane way to handle many error conditions either. Predictably and efficiently handling fatal error conditions will require custom handling code regardless. For many types of software, the downsides of exceptions are not offset by a corresponding benefit in real reliable systems. I’ve worked on systems that work both ways and I have a hard time recommending exception use.
On the other hand, once your embedded system is sufficiently large, people will want to use (and inevitably will use at some point) standard containers such as std::string or std::vector. And without exceptions, all of those might invoke UB at any time (I have yet to see a standard library that understands and honors -fno-exceptions; usually they just drop all try-catch blocks and all throws)
Could you elaborate on that? I'd love to see an example of this. Are you saying that even relatively simple code (using eg. std::vector) could easily cause UB if -fno-exceptions is enabled?
Your compiler vendor has to pick a (reasonable) behaviour though and apply it consistently, and while they are not required to document it (IIRC - I think that's just for implementation-defined?) you can probably get them to tell you if you have a good support relationship with them. Or you can just figure out what the compiler does, and hope they don't change the behaviour too much with the next release :-)
Most standard containers have no way to communicate allocation failure to the caller in the absence of exceptions (think of constructors that take a size). Worse, the implementations I’ve seen would eventually call operator new, assuming it would throw if it fails. That is, subsequent code would happily start copying data to the newly created buffer, without any further tests if that buffer is valid. In the absence of exceptions, that won’t work.
I guess my hope was that the program would just terminate immediately the instant it tries to throw an exception while -fno-exceptions is set, thus ideally preventing any further action from the program.
Well, what do you expect std::vector<T>::at() to do if the index is out of bounds and it can't throw exceptions? Or std::vector<T>::push_back() if it can't reallocate to a larger size?
These are just some obvious cases. Not to mention that any use of operator new is UB if memory allocation fails and the system can't throw exceptions.
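For a sense of what an exception-free interface would have to look like instead, here's a hypothetical sketch (these helpers are not anything the standard library provides):

    #include <cstddef>
    #include <vector>

    // Failure is reported through the return value instead of a thrown
    // std::out_of_range.
    template <typename T>
    const T* checked_at(const std::vector<T>& v, std::size_t i) {
        return i < v.size() ? &v[i] : nullptr;
    }

    // push_back is harder: allocation failure inside the container has no
    // out-of-band signal, which is exactly the problem described above.
    // The best a wrapper can do is pre-check what it can see.
    template <typename T>
    bool try_push_back(std::vector<T>& v, const T& value) {
        if (v.size() >= v.max_size()) {
            return false;               // definite failure detectable up front
        }
        v.push_back(value);             // still relies on the allocation not failing
        return true;
    }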
In principle, I would agree with you, but the biggest problem is that the whole C++ ecosystem works the opposite way.
The main reason people use C++ over safer languages like Java is performance (memory, CPU speed, real-time constraints, etc). And C++ the language is designed for performance, but only with an expectation of a very powerful optimizing compiler. Most C++ std classes are extraordinarily slow and inefficient if compiled without optimizations - certainly much slower than Java, for example.
So, C++ is not really C++ without aggressive optimizing compilers. And one of the biggest tools that compiler writers have found to squeeze performance out of C++ code is relying on UB not to happen. That essentially gives the optimizer some ability to reason locally about global behavior: "if that value were nullptr, this would be UB, so that value can't be nullptr so this check is not necessary". And this often extends to the well defined semantics of standard library classes outside their actual implementation - which rely on exceptions.
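A tiny illustration of that kind of reasoning (whether a given compiler actually drops the check depends on version and flags):

    #include <cstdio>

    int length_plus_one(const int* p) {
        int n = *p + 1;          // UB if p == nullptr, so the optimizer may assume it isn't
        if (p == nullptr) {      // ...which means this branch can legally be removed
            std::puts("null!");
            return 0;
        }
        return n;
    }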
So, to get defined behavior out of the std classes in the absence of exceptions, either you disable many optimizations entirely, or you carefully write the optimizer to have different logic based on the no-exceptions flag. But all C++ committee members and C++ compiler writers believe exceptions are The Right Way, for every situation. So getting them to do quite a lot of work to support someone doing the wrong thing would be very hard.
In safety critical embedded systems there is no such thing as "program just terminating". The program is the only software that is running on your device, and you need to degrade execution to some safe state no matter what. Every error should be processed, ideally right where it occurred (so I am not a great fan of exceptions either).
at() with exception support is pretty much equivalent to a method returning an Option<T>. More precisely, it gives a superset of the functionality of returning Option<T>. If you declare the call site noexcept(), you should even get some compiler checking to make sure you handle the exception.
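One way to read the "superset" point, as a sketch: with exceptions enabled, the throwing at() can always be adapted into an Option-style call at the boundary:

    #include <cstddef>
    #include <optional>
    #include <stdexcept>
    #include <vector>

    // Adapts the throwing interface into an optional-returning one.
    template <typename T>
    std::optional<T> get_at(const std::vector<T>& v, std::size_t i) {
        try {
            return v.at(i);                  // throws std::out_of_range on a bad index
        } catch (const std::out_of_range&) {
            return std::nullopt;             // converted into an explicit "no value"
        }
    }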
> If you declare the call site noexcept(), you should even get some compiler checking to make sure you handle the exception.
Which compiler does that? At least g++ does not, and it is not what the specification dictates either.
I can't see how it is a superset either. If the library returns an Option, the calling code can process it as it pleases, including throwing an exception. On the other hand, if the library only indicates error by throwing an exception, it cannot work with a caller that is built with exceptions disabled.
Oops, you're right, the whole point of noexcept is to promise to the compiler that you know in practice exceptions can't happen, I got confused...
Otherwise, I should point out I explicitly said "at() with exception support enabled". It's also important that the ability to disable exceptions is not a feature of C++; the C++ specs assume exceptions work (just like the Java or C# or Go specs). It is a feature of certain C++ implementations that they support this mode, just like they support other non-standard features (compiler intrinsics, various #pragmas, etc).
Still, even with exception support enabled, I can't see what you can do with a function that throws that you couldn't do, in fewer lines of code, with a function that returns not Option<T> but Result<T, E>.
Disabling exceptions is indeed not in the standard, probably because of Stroustrup's position (I respect many of his opinions, but cannot agree with this one) - but it's what every sane compiler, especially one targeted at embedded systems, will support. Exceptions are designed for a controlled environment where a terminating program will return to somewhere that will maybe add a line to a logging system and restart it automatically. They only complicate things when terminating is an unacceptable scenario.
Yes, Result<T, E> should be equivalent in power to exceptions (the missing E part is why I was saying it's a superset of Option<T> functionality).
Regarding exceptions being more code, I very much don't agree. Even for embedded apps, the pattern of "if this fails, wind back up to some top level event loop" is quite common, and exceptions give it to you for free if you're also using RAII. In contrast, with Result<T, E> you have to write code at every level of the stack to handle it. Code which gets particularly ugly when you combine it with things like map() or filter().
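Rough sketch of that pattern, with invented function names: only the top-level loop mentions errors; everything in between is written as if nothing can fail, and RAII plus stack unwinding handle cleanup:

    #include <cstdio>
    #include <stdexcept>

    void read_sensor() {
        throw std::runtime_error("sensor timeout");   // deepest layer reports a failure
    }

    void process_frame() { read_sensor(); }           // no error-handling code here
    void handle_tick()   { process_frame(); }         // ...or here

    int main() {
        for (int tick = 0; tick < 3; ++tick) {         // the top-level event loop
            try {
                handle_tick();
            } catch (const std::exception& e) {
                std::printf("recovered at top level: %s\n", e.what());
            }
        }
    }

With Result<T, E>, each of those intermediate functions would have to check and forward the error explicitly.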
Prohibiting exceptions is necessary in the embedded space for, err, space and allocation reasons.
"Just crash the system and reboot the MCU" can make sense depending on the application. And where it can't, you need to take the same kind of care for handling every single problem at the call site, or correctly propagating it to a layer that can handle it.
Exceptions aren't special here, they are simply a way to do error handling and recovery.
It's the kind of rule that doesn't make sense for applications, but when you've got tightly constrained memory limits, it makes sense.
I have personally removed exceptions (to be fair, there were only a few) from an embedded application and introduced the -fno-exceptions flag. The binary size was reduced by ~20%, which can be important if you are doing SW updates to space and have a link budget... Also, the reduced code size is more cache friendly on a system with rather limited cache memories.
Well the problem was most likely with the code then, not the language feature -- because the changes are quite localized to throw sites and catch sites, not throughout the code.
There are cases in MISRA's problem domain where the software watchdog is part of the same program, and fully crashing that program is a different, more severe error than alternatives.
It depends on the problem domain. For automotive embedded software? Definitely not. But Google, for instance, bans them in much of its server code under the principle that exceptions should denote circumstances where the entire program cannot recover and when a server node cannot recover, logging noisily and failing fast so that it can be restarted into a working state is preferable to trying to unwind a bad state.
Given that constraint, they conclude that the overhead of maintaining exception unwinding and the non-local control flow aren't worth it.
The classic GSG prohibition on exceptions has more to do with a lack of exception safety in their legacy code base than anything else. Promptly-crash-on-failure can be achieved by adopting a "don't catch exceptions" style, with significant advantages of not throwing away much of the strengths of RAII or needing the evil hack that is two-phase initialization.
You can for all intents and purposes avoid two phase init with factory functions (which is how Go does it) and private constructors.
(To my memory, though, Google didn't throw away the benefits of RAII by disallowing exceptions... They discouraged complicated behavior in constructors, so the only thing a constructor could fail on was OOM, which was supposed to crash anyway.)
To quote myself from the other day: "Basically, the government has left it to industry and the unions to agree on a lot of stuff that is decided by law in other countries. E.g. Sweden does not have a minimum wage. This system has led to far fewer costly conflicts and strikes compared to many other countries for almost a century."
Counterpoint: if something pretty much works without a law being in place, why write a law?
Laws are never perfect, and writing one just for the sake of it both makes complying with the law more complex and opens opportunities for people with money to create legal moats around interpretations and case law precedence.
> Counterpoint: if something pretty much works without a law being in place, why write a law?
Well, I'll see that and raise you:
If remuneration packages at Tesla (including in Scandinavia) "pretty much works" (in the sense that Tesla are able to attract and retain talent, which they apparently are), why write a collective bargaining agreement?
Because they are working in the short term. Once corporations find a wedge, it's only a matter of time before they exploit it and reduce wages over the long run. I am glad that unions are taking a long view on this matter.
It's essentially mandatory unless you provide better terms on your own. If the unions want a CBA in that case, it doesn't change a single thing. You sign and nothing changes. Tesla claims to provide better than the CBA but openly refuses to sign which doesn't pass the smell test.
Tesla wants to punish anyone who joins a union which is super illegal. We're talking prison time if they don't back down. They can't take away the stock benefit when IF Metall wins. They've put it out there, and if they take it away because of unionisation, they'll lose. It's a huge fucking no.
The collective agreements in Sweden are pretty much a win-win between workers and employers. It is something that most of the industry and the unions care about. Basically, the government has left it to industry and the unions to agree on a lot of stuff that is decided by law in other countries. E.g. Sweden does not have a minimum wage. This system has led to far fewer costly conflicts and strikes compared to many other countries for almost a century.
I really love this era of sci-fi, a kind of short window between the immature boy fantasies before and the dull hard SF that came after. The weirdness, the hallucinatory, the craziness! And most importantly, so many great writers.
Doesn't Hard SF both predate and postdate New Wave? It's a thread that weaves through the whole of science-fiction. (heck - didn't Jules Verne once diss H.G. Wells for not being "hard sci-fi enough"?)
To be fair to the crowd that came after, the average consumption of creativity-enhancing substances was a lot less in the 80's and forward than in the [60's, early 80's] period.
If you're seeking weirdness and hallucinatory craziness, I think Peter Watts, Tamsyn Muir, and Charles Stross (among many others) can absolutely deliver.
Another fun offshoot combining New Wave and Cyberpunk is Transrealism. Rudy Rucker has a way with illustrating immense mathematical ideas while keeping the zany hippie dreams alive.
Others have, in the replies. See Operation Mockingbird; almost all are influenced by the state.
The other factor is corporate influence. Most of the media is owned by the ad dollars that drive it, so what passes the filters is often not the truth but what helps, or at least doesn't harm, the corp.
https://ecss.nl/standard/ecss-e-st-70-32c-test-and-operation...