I use C and C++ extensively (30+ years writing C, 25 or so writing C++).
I much prefer C when writing systems-level code. It's simpler and a lot more predictable. You don't get the illusion that things like memory management are free.
I /have/ written drivers in C++. Here you have to be very careful about memory allocation (calling 'new' in an interrupt handler is usually death, though I've also written very specialized allocators that work in IRQ contexts). STL isn't going to cut it, especially if you're writing something real-time that needs very predictable performance.
So, my basic prejudice is that while you can use C++ for systems stuff, you still really need to be close to C semantics, so you're basically buying namespaces and "C with classes" at the risk that some yahoo later on is going to #include <map> and utterly hose things . . .
Honestly, one of my favorite things about C is that symbols resolve uniquely. Efficient C++ code often involves templates, and good luck finding a tag-lookup tool that can resolve template specialization rules...
My own experience is that RAII has been very useful in low-level work when used to capture cleanup/finalization. Results in fewer if-statements and SLOC.
Issues of real-time performance and new vs malloc vs in-place/nothrow new also seem like orthogonal concerns here. The larger problem is that yahoo who's using the wrong algorithm and/or maintaining driver code w/o benefit of code review ;)
My (admittedly short) experience writing kernel drivers with C++ was that RAII could be great help, but there's no way you can rely on any library you did not write yourself (and one that you wrote with the express intent to use for low-level work at that).
C++ code just tends to assume it's fine to do whatever crazy stuff it decides needs to be done, like allocating 4 bytes at a time using new.
C code is usually more careful about that, and quite a few packages offer control of their memory allocation in C. (Yes, C++ does allow you to overload new and delete, and have allocator traits, and more. Come back when you've actually implemented this for a library you did not write, and we'll discuss war stories if I can remember them. It was 2001, it was ugly back then, and I'd be surprised if it is better looking now).
For my low level stuff I use C with namespaces. Seriously. C with namespaces beats the hell out of both C and C++ for most stuff. You can still link to C and C++, and you don't need to worry about the myriad of quirks you have in C++ and its compilers, while you can have some decent organisation in your code.
He probably means he uses a C++ compiler, but the only C++ feature used that isn't also in plain C is the namespace syntax.
This makes it easier to package up code into modules, since you can have namespace foo { bar } and namespace baz { bar }, without worrying that the two bar symbols will collide.
That's correct, and I use GCC for this. For some time I've also used m4 to pre-process namespaces and output C99-standard code. That would be my preferred way when I can get away with it, because C++ compilers introduce uncanny little differences that can haunt you in some edge cases, but it adds an extra step, and then I'm doing something that doesn't work in the outside world.
For me, the biggest argument against C++ is that programmers can make much better use of their time than learning the massive list of quirks in C++ compilers. Having more features is not a deal breaker per se, even if it does cater to messy code. In real world situations you can usually choose a compiler for the whole team and a style for the whole team. This works in my company. But you have to be aware that this doesn't have to work in all situations, or even most situations.
With inline functions and dynamic variable declarations already in C99, I really think the complexity/usefulness compromise of adding anything else that is also available in C++ is very, very negative.
For other higher level features there are many other languages I'd take over C++. Ruby, LISP, Python, even Java or C# if you're that fond of C-ish syntax. For low-level stuff and speed, nothing higher level than what I said above. In my team C++ is nobody's favourite language but it's still what we use the most. Sadly, there are many other factors other than personal preference.
I'm interested in your comment about m4. I've never really used it except for sendmail configuration. The idea of using it to pre-process namespaces is interesting.
Would you mind expanding on how you set things up and what you do to use m4?
Sorry for not responding to this earlier, I just missed it.
In m4 I simply created a preprocessor of sorts that renamed variables to carry a prefix depending on their namespace. I haven't used this for a good while, because MSVC doesn't support other modern C99 things that I consider basic unless you compile in C++ mode, thus negating this. At work I need MSVC compatibility...
Hope you get to read this.
If I were to do this again today, I'd use OMeta. Give it a look, it's dead simple if you have the time to read through it.
#include <map> is hardly going to work past the first insertion, which will call the allocator.
The argument "let's use a castrated language to avoid mistakes" falls short. What about communication within the team?
Properly used and tailored, the STL is extremely adequate to kernel development. The power of the template engine enables the compiler to do some very clever optimizations.
Huh? That link claims "The STL will not work in kernel mode and cannot be used", which is not much of a surprise given the STL's use of exceptions and dynamic allocation.
In any case, kernel guys seem to prefer their data structures in intrusive style rather than the container style ubiquitous throughout the C++ standard library. (For an example, see the intrusive list and red-black tree implementations in the Linux kernel.) Intrusive structures allow exactly the kind of low-allocation, very memory-friendly layout that kernel developers are looking for - no clever optimisations necessary.
C++ features like stronger type checking, more careful casts and RAII are a more compelling argument for the language.
Indeed, I've seen a talk from Bjarne recently where he said that by considering what is required to make RAII work in the presence of exceptions, you can basically derive the entirety of C++.
Everything that C is, C++ is as well. The single simple feature that I don't have to declare all my variables at the beginning of the method, but where they're actually used, is enough to kick the ass of C to the curb. Then there's the rest of the features, including the little thing known as OO...
To object against C++ with the argument that incompetent coders can do bad things... well, it's nonsense. Incompetent coders can do bad things in ANY language. For competent coders though, C++ means increased productivity and reusability, and in the end, that's what counts.
"The single simple feature that I don't have to declare all my variables at the beginning of the method, but where they're actually used, is enough to kick the ass of C to the curb."
Has anyone really not seen this in the last two years?
Anyway, Linus may be a very experienced C programmer, but that doesn't mean his opinion on C++ carries much weight... I'd be more interested on what someone who actually has a lot of experience in using C++ says. Especially with modern C++ and recent tools, libraries etc, which are very different from what was around five or ten years ago.
I suppose it is nice for a change for someone bagging out C++ (however inaccurately) to be advocating C instead of a managed or interpreted language though!
"I'd be more interested on what someone who actually has a lot of experience in using C++ says."
I have considerable experience with C++, going back to when it was called C with Classes and you used a preprocessor to convert C with Classes code to C and then compiled the C.
What Linus says is pretty much true. I don't expect it to matter though since when people say "I'd be more interested on what someone who actually has a lot of experience in using C++ says." they don't really mean it.
There is another guy I know of with considerable experience in C++, Yossi Kreinin, who knows more about C++ than Stroustrup. He has a very nice web site that is one of the best references in C++ I have seen, going far deeper than others. Yossi is the number one expert on C++ in the world today. I recommend his insights to all: http://yosefk.com/c++fqa/
I've used C++ more than any other language, both professionally and for personal projects, for 13 years. I freely admit that C++ is complex, and it takes considerable effort to understand how to write good C++ code. Even after 13 years, I know that I do not know every rule. Nevertheless, I have been very productive with it.
I got about 3 pages into the meat of the FQA. I can't bring myself to spend the time to go further, because his counterpoints already seem overwhelmingly specious to me. This leads me to wonder on what grounds you characterize him as "the number one expert on C++" "who knows more about C++ than Stroustrup".
The best reference on C++ that I have seen remains the C++ FAQ. (http://www.parashift.com/c++-faq-lite/) I have also always enjoyed Stroustrup's guidance on his language, and I'm inclined to continue believing he knows at least as much about his own language as anybody, until I see some more convincing evidence to the contrary.
You can pretty much know C inside of ten years of using it professionally. I swear, after twenty years of using C++ I still come across things I've never seen before; the rabbit hole is very deep indeed.
Now with the new C++ there's a ton of new stuff to absorb.
There's never a hope of knowing 100%, but it is reasonable to be familiar enough to know 80% with ten years of experience and 95% with twenty. By thirty if you're the kind to take knowing your toolset seriously, 99% is possible. That last 1% is the "you can't be serious" aspect of the C standard.
Isn't every critique of programming languages necessarily very personal? After all, programming languages are languages. They help people develop and express ideas, and we don't know a whole lot about how creativity actually works and how it is influenced by the tools we use in the process.
Take inconsistencies for instance. An inconsistency between two things means that one but not the other can be inferred by applying a particular principle. But who chooses the principle that is applied to decide whether or not there is an inconsistency?
There is no shortage of "principles" in people's heads, many of which are not at all formal, maybe not even explicitly stated and they might not be called principles at all. They may be complex webs of associations, habits, or patterns that apply in some but not in other application areas.
Another such thing is the balance between generality and special casing. You could probably find out empirically what level of either is outside of what most people can use productively. But no individual is most people. So everyone has to decide what makes them personally productive.
The only regrettable thing is that so few people even try to find out.
"C++ is a horrible language. It's made more horrible by the fact that a lot of substandard programmers use it, to the point where it's much much easier to generate total and utter crap with it."
Haven't seen it. I've seen many other examples of Torvalds being an ass, but I don't bother to keep track really.
It does make me wonder though how he's able to get away with responses like that, when most other "founders" (not sure what the proper term here really is) would have the majority of their users/supporters just go elsewhere.
EDIT: ... Let alone have supporters who get defensive enough to start downvoting someone who stated the obvious (he did act like an ass), while asking a reasonable question too.
I've read Torvalds' post at the top of the page. I've also read the balance of opinion on this page. The majority view is that C++ really is a terrible language for writing a kernel. Fine, I accept that.
Now re-read Torvalds' post. How many people here would want to work with someone who regularly expresses himself like that? I know, I know; substance is more important than image and so forth. But that post reminds me of what people said working with Steve Jobs was like before he died.
You are 100% right about not wasting one's time with jerks. But I disagree that Linus regularly expresses himself like one. Really: open lkml.org and search for his posts. Most of the time, he's an ordinary project maintainer.
Sometimes, he's quite the opposite of a jerk [1]:
WARNING! I wasn't kidding when I said that I've done this by reading gtk2 tutorials as I've gone along. If somebody is more comfortable with gtk, feel free to send me (signed-off) patches.

Just as an example of the extreme hackiness of the code, I don't even bother connecting a signal for the "somebody edited the dive info" cases. I just save/restore the dive info every single time you switch dives. Christ! That's truly lame.
One must remember that his role as maintainer of Linux requires him to have ultimate, unambiguous opinions about a lot of stuff. He's the judge of what effectively goes into the "official" kernel tree. Moreover, most kernel contributors don't deal with him directly — git is the materialization of that modus operandi.
When he does express himself like that, however, it becomes news. He always has a reason, though. In this case:
Please don't talk about portability, it's BS.
Is this how normal people start a conversation? Who's the jerk, here?
Except it's not true: The Commodore Amiga delivered multi-tasking, shared objects and sophisticated IPC in only 256k of main memory. They cite OO techniques as being key.
"When I first looked at Git source code two things struck me as odd:
1. Pure C as opposed to C++. No idea why. Please don't talk about portability, it's BS."
Come on. Dude was picking a fight, and he got it. Maybe there are other examples of Linus being an ass, but I don't see how this qualifies.
I haven't worked closely with Torvalds, but I have worked with other OSS project leads whose asshole responses come roughly in proportion to the popularity of the project, and I have a theory on it.
I don't know if it's intentional or not but many OSS project leads respond like this to questions that come up a lot but don't merit a response. How many times do you think Torvalds has had to field "questions" of the form "C is crap, you should use C++, it's like C only better!"
To a certain extent it is garbage in, garbage out, if you ask a stupid question, you get a stupid answer.
This is the situation not only with OSS projects, but with BBSes, forums and/or IRC chat rooms in general. Given the technology, it is amazing that we haven't developed bots that will automatically answer these questions based on previous responses.
That wouldn't even be too hard. I've seen some help channels that have bots with commands to answer FAQs (mostly in the form of "here is a link, READ IT"). A simple question answering AI would be interesting, though.
That still needs human intervention. I think Stack Overflow has a bot named Community. I am not completely aware of how it works, but if it is completely automated, it is good.
And more than that, he's right: if you care enough about things the kernel cares about, you really, really don't want C++ "idioms" in kernel. And you don't want to work with the people who don't understand that. So it's really convenient to piss such people off, just as he said:
"for something where efficiency was a primary objective the "advantages" of C++ is just a huge mistake. The fact that we also piss off people who cannot see that is just a big additional advantage."
He delivers personally, but Linux as a kernel is a pile of crap: mass duplication (mdadm vs dm), POSIX capabilities vs actual sane capabilities... Linus is good at coding but absolutely horrible at direction, and that's exactly what Linux needs: direction for the better instead of evolution of substandard abstractions.
This is a cherry-picked email that, while abrasive, does not single out any particular person for abuse. Messages about individual people on the linux-kernel mailing list are usually pretty professional. (Mind you, they rarely get as downright friendly as the Perl or Python people.)
Counterintuitively I think the underlying reason is that C++ follows the exact same philosophy as C. For all of the following statements you can replace C with C++ and it is equally true:
Every feature X acts in such a way to make implementing it at the time of its invention as expedient as possible
The best heuristic for guessing how a feature you don't know (or forgot) works in X is to say "let's imagine X was just as it is now, only without this feature, how would I implement it?"
All of this means that in order to be a competent X programmer, one ought to be able to write an X compiler without too much outside help.
I could go on and on like this, but hopefully you get the point. Now it is clear that the philosophy for X works for small languages (like C) but could easily have scaling problems (like C++).
It also becomes clear why so many places subset C++, but don't pick the same subset. The problem isn't that some feature is broken, the problem is that there are too many features without an overriding rule that most people can apply. I would also believe that Bjarne Stroustrup can be quite effective programming C++ since he knows the language intimately enough.
I've been writing C and C++ for a long time (kernel and user mode), and what I find is that it takes a fair bit of discipline to write C++ code (like hiding new/delete for stack-only objects, or ensuring operator= works for heap objects, and so on). Debugging with STL and templates can be a PITA, since the error messages are so convoluted in most compilers. One thing I would agree with Linus on is that the talent pool of disciplined C++ programmers is pretty scarce. There are tons more C programmers that have enough OO experience from faking vtables and building structs-with-callbacks to simulate class inheritance and what not.
I think we're missing the point here. Taking the side of the customer of git, I'm happy. I type things on a CLI with git and things happen pretty fast. I'm happy. I don't care whether you wrote it with C, C++ or Haskell. It works. It works just fine. To Linus's point, I've never used, or even heard of someone using Monotone. That must say _something_.
For one, it is horribly slow. I do not know for a fact whether its slowness is caused by C++ as a language or by the quality of the programmers who wrote Monotone... but it is slow nevertheless, and that is all I know. By contrast, git is extremely fast. Go figure.
John Carmack is a C programmer coding in C++ which fits pretty well with sirclueless's point about that it is the C++ programmers that are the problem. All his games before Doom 3 were implemented in C.
I imagine he had some help with that. And a lot of what he's been talking about recently is about tools for keeping code quality high; he obviously places a premium on good code over working code.
My point is, you can write good, readable, performant, large-scale codebases in C++. Note that Carmack wrote his previous engines such as Quake3 in C, and switched to C++ at some point. I think one of the reasons was that as their programming team size grew from 5 to 50 at id, C++ started making more sense for them. If you look at a book like "Large Scale C++ Software Design", the subset of C++ it uses is about the subset that I think makes sense in practice, roughly C with classes. To be fair, I think that was the language subset that actually worked in compilers at that time.
Irrespective, I don't doubt that C is a valid and perhaps better choice for writing an OS kernel. There's less abstraction in C, and you have to worry less about making sure you don't use a language feature of C++ that has a hidden cost.
Linus is full of BS. To say that STL and Boost are total crap, that he likes to piss off C++ coders, and that he seems genuinely unable to see the advantages of C++ over C... only shows his own ignorance, and that he has a huge chip on his shoulder. I'm very disappointed.
I tend to take everything Linus says with a grain of salt. Not because he's wrong, or because he doesn't know what he's talking about, but there's enough of the puckish troll in him that I tend to read his posts more with an eye to their intended effect, than to what he's actually saying.
There are plenty of applications for which c++ is a perfectly sensible language choice. Git isn't one of them.
If I understand what proponents of C++ say, C++ is supposed to be suited best for medium-level programming, where abstraction is helpful but low-level constructs and speed are still needed. It's supposed to give you many of the benefits of higher-level languages while still giving you high performance. It's intended to be widely portable but still easily hook into special OS-specific facilities.
This description sounds like an excellent language for git to me. And in fact, while I don't like C++ much in general, if properly managed, I think a project like git could do well if written in C++.
It's that "if properly managed" bit that would be a nightmare from hell to manage... easier to just do it in C.
What it's intended to do is rather different than what it's actually used for in real life... code talks, anything else is just smoke, right?
If someone can come along and show us a better way in any language, then there's something to argue about; otherwise, the guy who has working code wins over the guy without it, always.
There's a large faction in AAA games that thinks C++ is a stupid idea. It's debatable who's right, but good points are being raised.
The problem is not so much that C++ is a worse language than C - it's that it makes it insanely easy to shoot yourself in the foot in hideously complex ways that take forever to unravel. See e.g. two phase name lookup - http://blog.llvm.org/2009/12/dreaded-two-phase-name-lookup.h...
There are plenty of other places where the design constraints of C++ have forced it into a dark corner on the edge of the realm of madness. It _does_ buy you additional abstraction, but there is a price you pay for that.
Personally, I'm not happy with either camp. C shows its age - it's from the 70s - and C++ is just out of control. So I'll continue to use both unless there's a decent replacement. (I'm squinting at Go, Rust, and BitC - and none of them are quite what I'd like to see)
I might suggest taking a look at D. It's more or less C++ redesigned from the ground up by a C++ compiler writer, so it's much more consistent and a lot more pleasant to program in. I've had bad experiences with the third-party libraries, which were sometimes inconsistently documented and half-finished on account of the still unfortunately small community, but the standard libraries are excellent, especially everything that Andrei Alexandrescu has touched. For elaboration: http://drdobbs.com/article/print?articleId=217801225&sit...
It’s a great pity that Objective-C doesn’t get more attention in this regard. It’s C, so it features all the simplicity, elegance and existing APIs. And at the same time it offers a good, simple and flexible object model with almost no surprises. The performance can be very good, too, as proved by the Apple runtime, and you can always drop to lower-level tricks or plain C when you need it. The syntax takes some getting used to, but then turns into another advantage, since it’s very self-descriptive. Again, it’s a shame it gets so little attention outside the Apple world, because it would be a perfect match for many use cases.
The problem is that Obj-C message passing is always going to have a cost. STL implementations can inline accessors, for instance, to provide containers and algorithms with essentially zero overhead that can even be faster than raw C because the compiler has more type information.
That’s a non-issue for the vast majority of people. As an example, I have written games in Objective-C for the first iPhones. In a game running at 50 fps you have about 20 ms to get a single frame out, and still I could freely use message passing in the inner game loop without giving it a thought. See also some older measurements by Mike Ash: http://goo.gl/DBTPE.
An Obj-C message is still about 5x slower than a C++ virtual function call, but even that is too slow for inner loops of expensive algorithms. For example, I'm doing DSP at 44.1k ops per second and I can only afford about one virtual function call for every sixteen samples but I can put my samples in an STL vector with no access overhead vs a raw array.
It's true that most people don't need the performance you can get with C++ but if you do it's still really your only serious option.
According to Mike’s measurements I linked above a cached Objective-C message send is faster than C++ virtual method call, so in a loop both languages should come pretty close. You can also get a pointer to the function implementing a method and call it directly. But I think we now understand each other – yes, Objective-C is not as fast as C++ in some cases, but those cases only matter to a very small number of people.
Also, it's _really_ easy to optimize those inner loops if you need to in Objective-C. If message passing is too slow, just turn it into some C function calls and you're done.
Low-level image processing library supporting multiple pixel formats: how would you support anything from float16_t to int64_t without templates? And with templates you can selectively rewrite functions for some types in assembly.
The problem with C++ is that it is not a language that has been built from scratch. It is just layer upon layer upon layer on top of C, written by various people for various use cases. So there is no way to "properly manage" a C++ project, unless it is maintained by one single person (or a few like-minded people).
I was going to disagree with you by saying that those layers are there for a reason; and they are there for a reason, mainly backwards compatibility. But thinking a bit more, I have to agree with you that still, those layers are the problem with C++. If you want to maintain old code, you would have to be conversant with loads of different idioms, it is as if you are maintaining multiple languages, starting from C and going on to the latest C++.
To write new code, C++ can be very clean, and C++11 is a huge step in the right direction for this. But to maintain code, especially code written by someone else, C++ can be a little bit hard. And it can never be easier than maintaining C code, because maintaining C code is a subset of maintaining C++ code.
An alternative to C++ in this situation is to have a C core or set of C libraries, with bindings up to something like JavaScript or Lua. World of Warcraft, Emacs, Firefox are a few popular examples of that architecture. GNOME 3 works this way too.
It's pretty important to have good automation for the C-to-high-level conversion, for example GNOME has a gobject-introspection to do this, and Firefox has XPCOM. Otherwise it becomes too tedious and bug-prone to be gluing two languages together by hand all the time.
I suppose node.js could be considered another example of this approach, by writing the super-tiny-and-fast event loop and http parser in C and then putting all the application logic in JavaScript.
Now this has me a bit confused: why no love for the STL?
I wish Linus had added more detail. Is the complaint that the binaries are too big (not quite, if you strip them of the unnecessary symbols)? Or is it that it can be a tedium to go over the reams of error messages that compilers spit out when things go wrong? The second point I am willing to concede; it requires you to read messages inside out, which Lisp does train you into doing. Or is the complaint about something else entirely?
Something else that I hear often that bothers me is the claim that the STL adds huge runtime overhead. Maybe it was true with the old compilers, but with the current ones (GCC 4.5, Intel) it's not true, at least not in a noticeable way. The whole point of the STL was the ability to generate optimized code. I have actually verified that iterator-based access patterns on vectors, for instance, get optimized away into simple pointer-based indexing into memory blocks.
I like STL, in fact I will go so far and admit that I will not code in C++ unless I sense that I will benefit from STL and or templates. Though STL gets used often merely as a container library I think you get more out of it when you use its algorithms. I really like it that I do not have to write for loops (and potentially get the indexing wrong).
If one squints the right way, it has map, reduce, filter and map-reduce all built in (transform, accumulate, inner_product), though I miss a vararg zip function. An un-ignorable side benefit of using the STL primitives is that if a parallelized version comes along the way, you get a fairly painless way to make your code parallel. You do have to force some of your snippets to be sequential to account for the fact that there is not enough work to parallelize. This is the direction where GCC's STL library is headed with its parallel_mode. http://gcc.gnu.org/onlinedocs/libstdc++/manual/parallel_mode...
@cube Appreciate your comment. For writing kernels and VMs I gladly buy your argument, to add to what you said there is the ABI mess.
STL containers are essentially the opposite of what's used in many C programs, Linux kernel included.
In C, in order to string several data items into a collection, one would add a container-specific control element to the data item, and then feed a pointer to this element to the container code. The container sees only these elements, and nothing else. On one hand, accessing the actual data item obviously requires casting and offsetof'ing, but on the other hand it allows placing the same data item into multiple containers, and this is great. Think: multiple key-value stores, each keyed by a different field in the data item. Generally speaking, the paradigm is that there are THE data items, and there are miscellaneous supporting structures that help organize them.
In C++, container is the ultimate destination for a piece of data. It is the owner of a data item. If I want a piece of data to be on a list and in a tree, I must decide which will be the "primary" container. Alternatively, I can store a pointer to an item, but this effectively negates safety benefits of STL containers.
I don't see this. You can have a container of pointers in C++ just as you can in C. No special ownership semantics is assumed, nor do you have to decide which is the "owner" any more than you do in C. Or the problem is the same, at any rate. Plus you're not constantly casting <void*> to things.
You can have a container of smart pointers, too, if you really don't want to think about ownership.
Since you can write C in C++, everything can of course be written in C++ too, if you don't use the "common" C++ libraries and roll your own infrastructure. Once you care about the "guts" of your program enough that you have to care about allocations, you'll see that as soon as you're thinking about "containers" having "pointers" to "objects", you're probably on the wrong track. "Objects" often contain pointers that make them part of several data structures, can be of different sizes (you can't even "sizeof" them), and you also want to be the one who controls when and where each of them is actually stored (in the sense of the memory block).
I've done these things in C++ without Boost, more "in a C way" inside the separate modules (those that were critical), and I still wouldn't use the Boost monster; the amount of code dependency is much smaller that way.
Container of pointers is not exactly what STL containers are about, and smart pointers is a big can of worms in themselves as I'm sure you know.
Another angle to consider: if container control elements (like list_head) are stored in the actual data items, they are effectively pre-allocated, meaning that inserting an item into the container involves no heap activity, and this simplifies the (error handling) code quite a bit. I mean... all in all, even if it seems hacky, it is the more elegant idea. STL is to C-style containers what Java is to C++ - something with all the fun drained from it :)
There are a few problems with STL when writing kernel or API code.
One is due to its completely generalized nature. Since it's not heavily optimized for either speed or memory use, it tends to perform rather poorly in both areas. That's not acceptable in kernel code, where it's often better to write optimized data structures for the problem at hand rather than just relying on templated code. In that situation, where performance is king, you want to tailor your data structures to fit your problems, rather than the other way around.
STLPort and Boost fare better in this respect, but STLPort in particular leads to the second problem.
The second problem is due to the std namespace. Using the std namespace means things that link in your code, even dynamically, cannot necessarily use other STL implementations. So if the kernel uses STLPort, that means that every other program written on top of the kernel needs to use it.
Essentially, listing all the hoops EA went through to have a usable STL for games. (And for programming purposes, AAA console games and OS kernels are pretty close.)
I'd still like to see how both these papers influenced the latest C++ standard in the end (I admit I didn't try to follow) -- is there finally a "standard" way to do all that?
Notice the dates on these papers though. 2005 and 2007 are a long time ago in C++ compiler land. I'm using the STL in very low latency DSP code with zero problems. You have to make sure you don't allocate memory in realtime callbacks but that's just as true of malloc.
Dates are irrelevant if you really want to control where something gets allocated. Last time I checked, "standard" C++ didn't offer the level of control described in the linked articles. If you know how I can achieve it, please write, but please don't dismiss my need with "you wouldn't notice, everything is fast enough, it's 2012."
Can you name any language that does give you that level of control in its standard collection classes? At least C++ gives you the power to define your own if you really need to.
"C++ gives you the power" exactly because there's full C underneath -- once you avoid "standard" containers, "OOP," "STL," "boost" and "best C++ practices" you can still malloc and place structs where you want. Sometimes you really need that level of control, and you get it from C. AFAIK, Google's Go simply says "we don't give you that level of control." That's why C won't be fully replaced by Go. And that's also what "omg I need C++" people don't understand. I don't blame them for not knowing; they didn't have to work on such problems.
But I blame them when they insist that what they do is enough for everybody. It isn't.
D gives you the same control as C, and at the same time a lot of the nice things other high-level languages have, without being a kraken like C++.
In standard-land, not so long ago. The process is slow.
As for the Lakos allocator model, two issues come up:
(1) You get an allocator pointer in everything.
(2) Each allocator still doesn't get to be simple: if the container passes the allocator down to its elements, the allocator has to worry about very differently-sized allocations. E.g. rb-tree nodes and the contained types.
The STL is a huge leap over C, but it could be so much better.
On one hand, it's not particularly good for really low level coding, as other people have explained.
But on the other hand, it's not really convenient for higher level coding, either.
To see what I mean, compare the interfaces of std::string [1] and Qt's QString [2]. Common, simple things like converting numeric data to/from string, splitting into substrings, and find/replace are a PITA in the STL. The containers are nice, but even they could use some work.
I know it'd never happen, but I think the non-GUI parts of Qt would make a really nice standard library.
I think that std::string does not support conversions as part of the class because those operations do not need direct access to the internal data of the string class. They could be written as helper functions (as opposed to member functions) easily without loss in efficiency. Member functions are used when they need direct access to the private fields. For example, C++11 has a new group of functions, std::to_string(...), and their counterparts std::sto*(...), which are not member functions [1].
"Portability" doesn't necessarily mean "runs on every system ever made". As pointed out by your parent, it runs on pretty much all UNIX flavors, even the ancient and obscure ones. If you want to blame anything for the abysmal git functionality on Windows, it should be Windows, not git.
IMHO, "being portable" is more about "being available to many users" than "being available to many systems". Focusing on ancient and obscure systems (used by who, exactly?) while ignoring systems used by a large potential users population doesn't make a lot of sense to me. And I don't see how you can blame that on these major systems either.
Being willing to specifically support archaic but important systems that desperately need git (idk, NASA supercomputers perhaps?) is fine, but that's kind of a niche strategy; it's not about portability anymore.
Making sure Git is portable across system architectures is quite important.
Also, portability in the "being available to many systems" thing is important for a lot of developers. I build stuff that currently has to deploy on SPARC/Solaris, but there are plans to make it so that in the not-so-distant future, all that stuff will be moving to virtualized clusters of x86_64 Linux. Portability in the narrow UNIX sense is pretty damn important to a lot of people.
I don't see how your example contradicts my point of view. When you move things from a fairly popular system to a widely popular system, you expect it to work as well as before. Now if instead you had chosen an obscure system nobody cared about, of course you would have loved for your tools to be supported as well - but hey, you knew that choice was risky.
Of course portability is pretty damn important: it gives you the liberty to choose the system you want (depending on your needs) while keeping the same user experience everywhere. But x86_64 Linux systems (for instance) should not be supported because they run Linux: they should be supported because a whole bunch of people use them and need that support.
It was basically C with encapsulation. So using C++ as a better C.
It worked well. It didn't use STL nor boost and the exceptions weren't as you'd know them. Memory management was much more carefully counted than is normal in C/C++ programs, and reservation meant commit.
It was programmed by a pretty clever disciplined bunch. Well, most of us were ;)
This feels like it was written by a guy who wrote a popular OS and has had people kissing his ass for 15 years. Basically the coding version of a diva musician.
Had he worked on any would that have been a valid excuse to be derogatory to others over something as childish as what programming language they use?
If you're going to ask me if I had worked on any OS kernels or distributed version control systems, the answer is no. But I would also never degrade people who might otherwise be willing to help contribute to my projects based on some idiotic notion of programming language superiority.
At this point I don't even expect anything better out of him because he did something great for the world. I expect better just because at this age he should be an adult.
Torvalds' post to me looked to attack C++ programmers in general. I'm not ignoring that the initial comment was pretty useless, but even so there are better ways to handle those than the way he chose.
Very few programmers spring fully-formed into kernel-level hackers. The correct place to filter is at the code level, not the person level. The 'right' people have the ability to change their approach until the code passes the filter.
C++ is many, many orders of magnitude more complex than Java, as Java was designed to be a C-style object-oriented language done correctly (i.e. with the foot-shooting elements removed...)
I have used C and C++ extensively (10+ years both). I once tried to implement the C++ inheritance model in a C program (basically just fill in the virtual table manually). It was a complete nightmare, fraught with silly mistakes all over the place.
Probably Linus is right that C++ does not belong in the kernel, but for application-level code, reinventing all the basic C++ things in C seems like a waste of time. Linked lists, virtual methods, some form of exception handling... why waste your time? I laughed pretty hard when I understood how the linked list implementation in the Linux kernel works (pointer arithmetic tricks + sizeof). People don't even invent anything different from what's already in C++.
The key to using C++ is to find a comfortable subset and stick with it. I haven't used C++ on projects with more than 3 people yet so I don't know the pains of agreeing on exactly which subset to use but I imagine it could be done.
One thing about inheritance: It's not necessarily a good way to model things. Often it's a procrustean bed where you try to twist your concepts into what the language will let you express.
I've seen a lot of people use inheritance to save keystrokes. "I can save time if Plane inherits from Bus, they've both got wheels, after all." That's a kind of "typing" system . . . but not the kind that people expect :-)
Honestly, the world does not often cleanly decompose into a class tree. Usually it's pretty messy.
I don't know C++. The accepted response does not start off well.
"C++ is a complex language, and if you don't learn it properly, it's very easy to shoot yourself in the foot. And that is also why you shouldn't listen to most non-C++ programmers hate towards C++. Most of the time, they didn't learn the language properly, so they're not really able to judge the language"
What does that mean? It sounds like without some significant amount of knowledge, you can't understand its benefits.
Thinking of the languages I know, I think the things they are trying to improve are apparent with a basic knowledge of the language.
Most people learn C++ as a feature-by-feature increment over C. This means they generally write function-oriented programming with objects instead of object-oriented programming, mix exceptions with returning errors, stick with pointers and char*'s instead of references and string's, and end up rewriting data structures before finding out about STL.
That being said I use some features of C++ I like (inheritance and STL) and while the result is somewhat bastard C/C++ it does what I need well, which at the end of the day is all we should really ask for from a language.
Linus's objections seem centered on the fact that it makes it easier to generate bloated code. While this may be true, there's nothing a little self-discipline can't control.
STL and Boost may not make sense for the kernel, but there's nothing wrong with using C++ classes at their most basic. The kernel would be far more readable if it used classes and simple inheritance rather than re-inventing the wheel with structs full of function pointers.
C++ gives you more flexibility with regard to encapsulation as well - it's hard to argue that that doesn't lead to cleaner, safer code.
"...rather than re-inventing the wheel with structs full of function pointers."
I think you got that backwards. Structs full of function pointers predate C++. Going for the reductio ad absurdum: why bother inventing C++? It's just a reinvention of the structs-full-of-function-pointers wheel.
You're right - "re-inventing the wheel" is probably the wrong phrase - I used it because it conveys a sense of needless effort when there is a better alternative.
Basically, those who don't have access to basic object-orientation are doomed to implement it themselves in a messier, more verbose form.
I think the argument that Torvalds was making here is that that's exactly what C++ is. Messy and unnecessary. You can write C++, but if you do so without the mess, you're so close to C that there's no point in using C++.
I, on the other hand, would argue that C's lack of object-orientation somehow dooms one to implement it. OOP is _not_ the pinnacle of software development. Further, I would argue that "structs full of function pointers" neither, necessarily, represents a desire for object-orientation nor is it messy.
... I'd say it's hard to argue that you're not implementing C++ style OOP, and that it wouldn't be cleaner to use C++.
Now, if you're just going to stick to the basics of C++ (ie. avoid templates, operator overloading, etc...), then there's a fair argument as to whether it's worthwhile. Certainly it's not worth rewriting the Linux kernel at this point.
But, I don't think Linus's comment that "C++ is a horrible language" is fair.
(I'm sure most of that is just Linus being Linus).
Just to follow up, here's a posting from Linus around that time: http://goo.gl/tTvg2
"The changes to get linux to compile with C++ were very minor, so if your extensions are well-written, it should be no problem at all getting it working. Gcc gives reasonably good error messages anyway, so you can usually just try to compile the old code and fix the problems as they crop up.
This saddens me as C++ is growing into such a beautiful language. C++11 is a major step forward, and I don't really buy in to complaints that are not about the current standard.
Bad programming is language agnostic. Disliking a language generally implies that you don't know enough about it to really get it.
I find it interesting how 'everyone' seems to hate C++, yet so many use it. If C++ is so bad, why do people continue to code in it? Personally I have just started to look at C++ due to Microsoft's integration with it on WinRT/Metro for Windows 8.
I mean, MongoDB, Node.js, Microsoft's Windows Runtime (which provides access to the system's API for JavaScript, C#/VB.NET and C++), MySQL, Membase, Haiku, and Chromium are all notable examples of software written in C++ that seem to be doing quite well.
May I recommend that if you're looking for a more considered critique of the C++ programming language you look at the C++ FQA (Frequently Questioned Answers) instead?
Although it is true that certain languages give you the flexibility of writing "utter crap", I can also write pretty crappy unmaintainable code in pure C just as easily. Nevertheless, after 14 years of coding in C++, I love how beautiful my C++ code comes out. All that "OO crap" makes my code easier to understand, debug, maintain and extend. I would take that at the expense of how much more difficult it is to create binary compatible libraries in C++ (something quite easy in C). I can see how for a kernel, that would be more important.
I've been working on my own DSP library in C++ on and off over the past year and a few times I've gone through the thought experiment of rewriting it in pure C.
Every time I conclude that all I'd gain in the process is the extra work of finding some third-party collection library, adding manual resource management where I currently use RAII, and rolling my own struct-based object system, probably with a bunch of macro glue.
But Mercurial was written in Python, and its performance is comparable to git's. So his argument about performance being the reason to choose C over C++ is not correct. Even interpreted Python, as used in Mercurial, was not the speed bottleneck. Yes, I know that most of the speed-critical sections were written in C. But almost all of the project is in Python.
Also, Darcs was written in Haskell and Bazaar in Python.
I hate C++ too (over 10 years of experience). But Linus is full of BS too.
Yes, the choice of C for Git was the right choice. But the source code of Git is a horrible, unmaintainable mess.
Why not take the kernel (it's gpl code) and re-write a major part of it in C++? And then fork that and get others to use it? That would either prove that C++ can be used in kernel or disprove it. We could call it C++nux ;)
Then, the BSD guys would have even more reason to call Linux garbage and the OS/language/superiority wars would rage on for even more generations. Wouldn't that be so cool.
I think it's partly a gambit or defensive permission, or sort of an allergic reaction to having C++ suggested to him SO MANY TIMES over the last two decades.
I've read this "news" like dozens of times, and every time I can't help laughing. What he is speaking of is generally true. If I am not writing in C, then I am writing in ASM for speed, or in LISP for less code. C++ just doesn't fit in my tool chain, because it really doesn't excel in any aspect.
C++ (and other mainstream languages, for that matter) excels in the aspect that you can actually write useful software with it. Yet, to date, there are exactly zero useful pieces of consumer software (e.g. browsers, office suites, video processing) written in lisp. Thus, I suggest that lisp zealots stfu about C/C++ already, and either fix their favorite language or invent a new one.
Yeah, of course, right... that's why we have Lisp programs on space probes [1], PS2 games written in Lisp [2], PG's Viaweb, Maxima [3] among others, ITA Software, Emacs, StumpWM, and a ton of other stuff I can't remember right now. Ah not to forget AutoCAD. I don't give a flying fart whether it's "consumer" software or not. A ton of useful software has been and is written in Lisp dialects and there are a lot of application domains that are really hard where you just cannot use C++ (or some other blub) because it's just not expressive enough. Often those are not consumer applications because most consumer applications are really mundane glorified reporting apps that could be coded in BASIC by a monkey (and if you read TDWTF you get the impression that happens more often than not).
Ah, "blub"... the trusty indicator of a brainwashed pg fanboy.
Look, I know that something has been written in lisp, still it is not the kind of software that people use (directly or indirectly) every day and you know it. Plus, you're just being disgustingly arrogant by indirectly saying that your precious lisp is too good to be used for the mundane, everyday software development.
tl;dr: lisp zealots are funny and arrogant
UPD: FYI, AutoCAD is NOT written in Lisp, it just supports extensions written in a dialect of Lisp.
FWIW Emacs's core is C and extensions (the vast majority of the code) are in Lisp. This is a common model for dividing up the work between "engine" and personalization/customization stuff and I think it works well. But it's not true that it's a "Lisp program", or not solely one anyway. Not unlike AutoCAD...
Linus is probably among the 1% of people in the world who actually need the performance gain that C has over C++ - as for the rest of us? Probably doesn't matter.
> needs to get the performance gain that C has over C++
With RTTI and exceptions disabled you should be able to get almost identical performance from C and C++. In some cases the C++ compiler has additional information that allows it to produce faster code as well.
Yup, and if you avoid using any of the new language constructs like objects and whatnot that C++ brings and basically stick to standard C, maybe using just a bit of C++ sugar here and there, true... but at that point you might as well be using C, and the argument is really moot.
You are wrong. Those are the only two features that incur any performance penalty just for existing, and even then it's implementation-dependent and in some cases can be zero (i.e. zero-cost exceptions). Other things that are possibly slow, like virtual functions, can be avoided in specific cases when needed. This is similar to how many C projects use structs of function pointers for abstraction, but not everywhere (including the Linux kernel).
You also need to not use:
STL (most implementations are too slow)
Boost or STLPort (namespace and compatibility issues if you're not producing executables)
Most OO features like inheritance and polymorphism (a lot of that is resolved at runtime, which eats cycles)
Function overloading (debatable, since the compiler should resolve the function signatures at compile time)
You're pretty much left with C with namespaces at that point.
C++ can actually be faster because it doesn't throw away as much type information. A good, typed C++ quicksort implementation, for example, will be faster than a C version using void*.
One of the reasons for the performance difference is that you can't inline into the libc DLL blob: If you use qsort(), you'll get a lot of overhead from calling the comparison function.
If you use a custom quicksort implementation and put it into the same translation unit as the comparison function (or compile statically and use link-time optimizations with a sufficiently advanced compiler) you can get the same performance out of C.
I don't completely agree, though I do completely agree when it comes to kernel, embedded, or performance-critical stuff.
One thing I love about Torvalds and that makes him great is that he has zero respect for fads. CS is unfortunately very, very faddish. Good ideas like OOP and Agile become fads, and then become universal hammers with everything becoming a nail, destroying whatever virtues these ideas once had.
It's quite fascinating seeing heavyweights have a go at each other. Most of the discussion here is right over my head, but I find myself quite enjoying it. At least it's better than the current boxing heavyweight scene. This is about the only post on Hacker News where I have read the comments from start to finish. Brilliant, all!
I guess what he meant was signal-to-noise ratio; I mean, A-player/C-player ratio. Too many C-player programmers start learning C++ without any idea of how the CPU and memory work; if you let them mess with the kernel, it would be a nightmare for Linus.
Why does the guy Linus is replying to think C's portability is BS? Anything besides the fact that any libraries used in your "portable" program must also be portable?
The fact that he leads with "language X sucks because it attracts type Y programmers" is quite possibly the worst, cheapest, and lowest attack I've ever seen in technology. It's sad that a technical hero like Linus would basically behave like such a tantrum-throwing child. It reflects poorly on the whole tech community.
The fact that he leads with "language X sucks because it attracts type Y programmers" is quite possibly the worst, cheapest, and lowest attack I've ever seen in technology.
We see a similar point being made against Rubyists and Rails developers on HN quite often, alas. And, worse, in the "I'd learn Ruby but it seemed like there was too much drama" crowd.
But no one can deny that C++ is the most fascinating programming language in the world. You can spend (or waste) as much time as you want to __LEARN__ this language and never be able to say you understand it ALL.
C++ occupies an interesting point on the curve of fascination vs. utility, but to call it the most fascinating language (by your own definition especially) is just hyperbole.