Hacker News

"Out of interest, can you elaborate on what exactly it is with C++ that you think makes it so scary that can't be avoided by adopting certain techniques (RAII for example)?"

The two biggest issues are (1) lots of undefined behavior and lots of complicated semantics and (2) the difficulties involved with handling errors.

The undefined behavior is something that Stroustrup justified by pointing to all the undefined behavior in C. This is a poor response in my view, since there are behaviors in C++ that could have been pinned down without breaking any compatibility with C; for example, the order in which function arguments are evaluated is left unspecified. There are also things that were left undefined for no apparent reason, such as flowing off the end of a non-void function along a control path that has no return statement (which can be disastrous in C++ and can make debugging a giant pain). When the standard cannot even rule out programs that could never be compiled into something sensible, the standard is deficient -- especially when most compilers can already issue a warning for this sort of thing, so why not just define it as a compile-time error?

C++ also forces programmers to deal with complicated semantics as the default. It may not seem like a big deal, but it is easy for a programmer to forget that "1/3" is not "one third." It is also easy to forget that "x+1" might actually have a value that is less than "x." The problem is that C++ makes these things the default, citing "performance"; it would be far better if the defaults were arbitrary-width integers, smart pointer/smart array types, and other high-level types, with low-level types like "32-bit integer" or "pointer to a character array" requiring an explicit cast by the programmer. Critical systems have failed in the past because of obscure corner-case semantics like integer overflow, and the same corner cases create security vulnerabilities in software.

Error handling is a complete mess in C++. Exceptions are OK, but they cannot be thrown from destructors, they should not be thrown from constructors, and there is no way to say, "retry the operation that threw the exception" -- throwing unwinds the entire stack, so the client code is responsible for knowing how to retry things. If you are in the middle of writing a record to a thumb drive and that thumb drive is removed, what do you do? You cannot simply prompt the user to reinsert the drive and have the "OK" button cause the rest of the record to be written (unless the exception is thrown again because the thumb drive is still not there) -- not unless you want your IO library to open dialog windows, and so much for encapsulation. That basically makes exceptions no better than checking return values, other than that exceptions are a little cheaper on the non-error path and cannot be silently ignored.

The situation with exceptions is so bad that even the C++ standard library requires some errors to just be ignored. The standard IO classes, for example, are required to silently swallow an error that occurs while closing an underlying file when the object is being destroyed. This forces programs to explicitly invoke a member function to close the stream before destroying the object -- so what is the point of having a destructor at all? Destructors have no good way to signal errors: they cannot throw exceptions, they have no return value to carry an error code, and there is no caller positioned to check a global error flag when they run during unwinding -- so you can only put operations in destructors that either never fail or whose failures are safe to ignore (can you think of any such failures?).

C++11 could have fixed this error handling problem: it could have required that the stack only be unwound after the catch block ends, which would have both solved the destructor exception problem and opened the possibility of Lisp-style "restarts." Unfortunately, the standards committee did nothing of the sort, and instead made the default behavior of a destructor exception "terminate the program" (destructors are now implicitly noexcept, so a throw from one calls std::terminate, which aborts). Here I was, thinking that "abort" was something that should only be called under the gravest of circumstances, yet a C++ program might abort because it was unable to write to a log file. Typically, one hears the argument that exceptions should only be thrown in "exceptional" situations, which we are meant to read as situations grave enough to justify "abort" -- yet the standard defines exceptions for errors that can be corrected and for which a program exit is not strictly necessary, like an attempt to write to a full disk. And lest you think that this is no big deal, I have seen people lose over 24 hours' worth of work because a program exited when the disk was full -- and you definitely would not want a 911 dispatch system to shut down just because it could not write to a log file.

You point to the use of the Linux kernel in critical telecom and financial systems as if it were evidence that the Linux kernel can be relied upon. I see the Linux kernel panic periodically. I doubt you would be happy if your 911 call were dropped because some telecom equipment was suffering a kernel panic. You see Linux in these places because technically better systems are too expensive and there are too few people who know how to configure and use those systems.



