We use C quite heavily at Bump in our mobile apps. With our recent app Flock, we wrote nearly all non-UI logic in C so it could be shared between iOS and Android. This made moving to a new platform much faster and easier than in times past. A lot of this core code is also used on the server to manage incoming connections from the clients.
Aside from the ability to share code, it also makes sure that we are not using any more memory than we really need. Avoiding unnecessary GCs is an important concern in writing performant Android apps. Having the core C layer hold on to as little memory as possible is a huge win.
On a less objective note: When I left undergrad I thought I'd never write C again, but it has been such a pleasure to work with it once more. Knowing (almost) exactly what operations your code is doing is quite liberating after working with so many other higher level languages and frameworks.
> Knowing (almost) exactly what operations your code is doing is quite liberating after working with so many other higher level languages and frameworks.
This is exactly why I still enjoy programming in C so much. Even compared to other languages that I adore and voluntarily use often, programming in C and programming in anything else feels like the difference between treading water in a swimming pool, and treading water in a murky lake. I just have a much greater, rather comforting, sense of situational awareness while using C.
I keep on thinking I want to pick up C again (made a living with it 20 years ago), then I remember all the goodness that C++ adds (made a living with it 10 years ago), then I remember that C++ is huge and cryptic and I get all depressed again.
Learn Lua. I went through the same basic phase as you described, and once I'd gotten my competence with implementing Lua bindings up and running, I never looked back. For me, Lua+C is the most elegant combination of language tools I've used, and as a developer with 30+ years of experience, I've used most of them.
Lua really, really rocks - especially if you look at it as the domain tool it is, and not the prepackaged broad-spectrum scripting language that many people consider it to be ..
Agreed. At my last job I did most of my programming in Lua. We used C++ & Cocos2d-x with Lua/C bindings for our game engine and wrote all of the game logic in Lua.
I really like a lot of Lua's features, especially closures and the way metatables work. Although Lua doesn't support true classes, it's possible to implement them nicely through metatables.
Fortunately, it is very easy to implement these structures in Lua, if they haven't been already. The basic datatype of Lua - the table - can be treated as a hash map, a sorted set, a priority queue, very, very easily .. in fact if you think these things need to be implemented in C, you're missing a big part of the picture with Lua. The power of Lua as a language is really derived from the "morphological" nature of the basic table datatype .. Lua has all of the structures you're asking about, already.
And anything it doesn't have, which one would truly need to derive from the C/C++ side, is a few simple binding calls away .. but anyway, tables will do all you ask, and more, if you learn to use them properly.
When you're using C as a layer under some other high-level language you get those implemented for you. For instance, if you write Python bindings, you will use Python's data structures from C. If those are too slow for you, you'll want a specialised structure - there are many libraries providing collections for C.
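To make that concrete, here is a minimal sketch of my own (not from the parent) of the C side of such an arrangement, using CPython's C API to build an ordinary Python list; the build command in the comment is only indicative:

    // Sketch: populating a Python list from native code via CPython's C API.
    // Build against Python's headers and link libpython (see python3-config).
    #include <Python.h>
    #include <cstdio>

    int main() {
        Py_Initialize();                              // start the embedded interpreter

        PyObject* list = PyList_New(0);               // new, empty Python list
        for (long i = 0; i < 5; ++i) {
            PyObject* item = PyLong_FromLong(i * i);
            PyList_Append(list, item);                // the list takes its own reference
            Py_DECREF(item);                          // drop ours
        }

        std::printf("list has %ld items\n", (long)PyList_Size(list));
        Py_DECREF(list);

        Py_Finalize();
        return 0;
    }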
There are two ways to look at Lua. There's Lua-the-abstract-language which can certainly be used as a broad-spectrum scripting language. I certainly use it in that capacity.
Then there's Lua-the-implementation(s) (PUC + LuaJIT), both of which are designed from the start to be maximally easy to embed in a host C program, or to delegate work to shared libraries written in C. In my opinion, it is an order of magnitude better at this than Python or Ruby; some of this is a matter of taste but I believe most of the advantages are objective.
Not to put words in the GP's mouth, but I take the "domain tool" comment to mean: "You really should try to use this in close cooperation with C, and not just as a stand-alone language. If you don't you'll miss a significant portion of the value." Note that using it in concert with C need not be done in a single person's head (though this case is also common); Lua+C is an extremely productive way for a systems programmer and an applications programmer to work in harmony while enjoying most of the best of both worlds.
You got my message pretty well 100%, I was just about to type a sentence very close to this one: "You really should try to use this in close cooperation with C, and not just as a stand-alone language. If you don't you'll miss a significant portion of the value."
Yes, you can just run 'lua' at the console and treat it as a local scripting language, with the default bindings available in the distribution Lua-interpreter .. but you can also take the VM itself, make it your own, and put it in your own project for domain-specific applications.
Essentially, MOAI is a collection of C and C++ libraries, tied together into the Lua VM to provide a cross-platform development tool for Android/iOS/Linux/Mac/Windows/etc. This is accomplished by 'gluing' the Lua VM into the space between a variety of different APIs and making those APIs available within the Lua environment - in this capacity, it serves very, very well. I could imagine building an entire GUI/OS in a very similar way, and may end up doing just that one day soon ..
Thanks for clarifying. I will be looking into Lua from this standpoint.
Now for a separate question -- do you find MOAI practical for complex mobile apps? Is it one of the most productive ways to build cross-platform mobile apps? If not, why not?
(I came across MOAI a while back and wondered this. Would be good to get an opinion from someone with hands-on experience).
Personally, I'm 100% a raving MOAI fanboix, I can't stop promoting it as a neat way to develop cross-platform apps.
Okay, it's not a way to get a native app developed - other frameworks are better for that, although I'm hesitant to recommend any.
MOAI itself is geared towards games, and gaming-style user interfaces. This means, of course, that you can develop all the standard paradigms for UI that exist in the mobile world, with the benefit (or disadvantage, depending on how you look at it) that it will look and feel the same on all platforms.
I was given the task this year to develop a utility app with MOAI (Rauchfrei Durchstarten - a German-language app to assist smokers with quitting their habit, available in Google Play as well as the iOS App Store) and while I wouldn't consider it the prettiest looking app in the world, it was definitely a rewarding experience. The project fell into the "learn on someone else's dime" angle a little, though, so there was a bit of a controversy over the use of MOAI for this project by the company that requested it. However, the job got done. I'd get it done a lot faster, slicker, and with better results now, if I had to do it again (also, I would definitely not work with the original designer, who didn't understand much of what MOAI can do and enforced a rather bland set of rules on me in their work).
But .. one of the great things about MOAI right now is that there are a lot of 3rd-party frameworks popping up specifically for MOAI, which at face value might seem a little unusual since MOAI itself is supposed to be all you need as a framework - but products such as Hanappe, moaigui, and Rapanui are all frameworks for MOAI which give it advanced features with ease - such as scrolling listviews, buttons, dialogs, menus, etc. All of this on top of a very powerful games system which provides a lot of high-end features (Grids, Tiledmap support, pathfinding, etc.)
If you take the stance that MOAI is a lower-level framework which provides performance and portability in an extremely tight package, then add any of the above frameworks (please use Google to find them, they're easy: "hanappe moai" will get you there..) you may see that MOAI can be built upon with extreme power returns.
On the other hand, if you absolutely have to have a native UI, and the idea of implementing your own characteristic UI elements seems abhorrent, then you might get tripped up a bit. I (almost) did, with the Rauchfrei app, anyway .. in that case I used moaigui to provide form and basic UI elements. From a broader standpoint, it probably would've been better to build the app in HTML5 with Titanium or something .. but again, I wouldn't have learned much MOAI that way. ;)
As for whether MOAI is practical for complex mobile apps, I would say it really depends on your competence level, in general. I would say that MOAI is not something that beginning/slightly-less-than-fully-confident programmers should pick up and try to use, if they are under the gun.
But if you actually like the idea of inventing new paradigms and delivering them in a cross-platform package, then MOAI can kick some really serious ass. Since most of the mobile GUI these days is derived from gaming-style interfaces, it could be that a MOAI-based app will revolutionize the mobile market soon enough. It's certainly a great gaming toolkit, and in that sense, would make for some nice new paradigms to be invented ..
You've really sold me on this. Just letting you know. This is something I've wanted to see, and sink my teeth into, for a while now. Didn't know something like this existed already. I haven't heard exactly good things about developing with Marmalade or PhoneGap from friends or HN. This sounds like a much better alternative to both.
It doesn't have the "batteries included" philosophy of languages like Python, where there's already a ready-to-use library for everything. If you want to do anything non-trivial with Lua, you'll probably have to touch some C code.
Others have contributed good answers to this question in your sub-thread, so I will just clear this up for you:
domain tool - a tool which can be applied to, and adjusted according to, the needs of your application's domain. You can embed the Lua VM in your 'broader application' and use it as an internalized scripting language. If, for example, you've got a C/C++ program which sorts apples, you can export those apples, with bindings, into the Lua VM and reference them from the Lua language as apples, just the same (see the sketch after these definitions) ..
broad-spectrum scripting language - Python, Perl, Lua. Each of these can (usually) be found in /usr/bin somewhere, as a single executable that can be used to run scripts in the language. Lua has a part in this role too - I frequently write system-admin scripts using #!/usr/bin/lua, but in this particular case I don't have any 'customizations' - I get only what's built in to the Lua executable as shipped by my distribution.
But, beyond this, I could take the Lua VM and build my own domain tool with it. See the difference now?
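To ground the apples example, here is a minimal sketch of my own (assuming Lua 5.x's C API; count_apples and the crate-of-24 detail are made up) showing what it means to export a C++ function into an embedded Lua VM:

    // Sketch: embed a Lua VM and expose one native function to scripts.
    #include <lua.hpp>                               // Lua's C API, wrapped for C++

    static int l_count_apples(lua_State* L) {
        int crates = (int)luaL_checkinteger(L, 1);   // argument supplied by the script
        lua_pushinteger(L, crates * 24);             // pretend each crate holds 24 apples
        return 1;                                    // one return value
    }

    int main() {
        lua_State* L = luaL_newstate();
        luaL_openlibs(L);                            // standard Lua libraries

        lua_pushcfunction(L, l_count_apples);
        lua_setglobal(L, "count_apples");            // visible to Lua as count_apples()

        // Lua code referencing the exported function "just the same"
        luaL_dostring(L, "print('apples: ' .. count_apples(3))");

        lua_close(L);
        return 0;
    }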
I don't know what you mean by C being extreme .. for me, it's not extreme at all, and rather I find working in C to be a comfortable, rewarding experience pretty much 100% of the time. But being able to extend beyond the scope of a static C application, and instead using C as the library/foundation for functionality that is exploited in Lua, is what I'm talking about. Basically, you can have 'vanilla' Lua - /usr/bin/lua - or you can have "branded Lua" where, by branding, I mean - you add your own flair as you desire, to the Lua VM, using C (or C++).
MonoTouch/MonoDroid are not in my area of interest, since I prefer to have full sources available for everything, and I see no reason to buy such a tool when, with Lua+C/C++ I can just as easily build it myself ..
Languages are among the more religious topics in modern computing, and sometimes it can be hard to talk about them without immediately provoking a tedious flame war. I can only say that for myself, after programming for twenty years in every hot language you can name (really), that I now find myself going back to C more and more.
Why? Because while others are writing Javascript-like languages that compile to Javascript, debating the merits of Rust vs. Go, insisting that a language without monads is unusable, arguing over whether a language without support for multiple inheritance can be "real OO," claiming that homoiconicity leads to mystical union, and proudly stating that x language is "almost as fast as C" -- while all of this is going on, there are legions of people who have been quietly building nearly every significant piece of the infrastructure of modern computing in a language that is now over forty years old.
They don't whine, they don't go chasing after "frameworks," and they don't succumb to the twenty-one-days model of learning. They just build shit. Lots and lots of it. In fact, you're all soaking in it.
Like I said, it's a religious subject. But as for me and my house, we will use C.
I read these types of comments from time to time, usually from programmers who have been in the industry for a long time. They're always semi-derogatory towards new technology and carry a disclaimer that the author has used said technologies. But, always, the old ones are better - usually just for being older, not for any inherent quality the comment has actually identified.
I love C above almost all languages you subtly referenced. I disagree with a lot of what you said, and a lot of what you implied. I just wish I never become that guy.
After a long and painful apprenticeship, you finally realize that being a great programmer has nothing to do with the language or its features. You get to the point where you're completely done learning this or that new technique (which, by the way, is nearly always an old thing for someone else). Your "profound enlightenment experience" with Lisp is ten years old; your copy of Design Patterns looks like an old Bible; you've had dozens of dirty weekends with stack-based languages, declarative languages, experimental languages, etc. You've been all of those places and done all of that. Many, many times.
At that point, the pros and cons have fully evened out, and what you want is a tool that you can completely and totally master. You've stopped searching for tools without warts, edge cases, and gotchas, because you know those do not exist. What matters is community, ancestral memory, stability, maturity, docs. Above all, you just want to build something really great.
It is at that point that you become that guy. You might well turn to C, as many before you have. Maybe, maybe not. Lots of us have.
We aren't crusty old neanderthals, though. We're just at the logical end of the process.
I've been a C programmer since 1984, and have been happily ignoring a lot of new-school language fads as a consequence. I've found that there's very little reason not to use C - it is small, it is fast, and with 30+ years of experience now, I don't get tripped up at all by any of its thorns.
For me, now, new school C can be expressed in one word: Lua.
Everything I need C++ to do, I can get Lua+C-bindings to do easier, with less fuss, and more productively in the short as well as long run due to the ease of use of the language. For almost every modern project I've worked on, the combination of a powerful C library collection and the Lua VM wrapper to make it all scriptable has been absolutely unbeatable.
Sure, I write Python .. I've done more than my share of Ruby projects, and I could definitely spend a few years working in pure C++ if it were necessary. But the point is that none of these languages have been necessary recently, with my Lua+C skills ..
I'm curious, could you write a bit more about how you do this? Do you write Lua programs with performance-intensive parts written in C? Or more like C programs with some component wiring in Lua? Where do you draw the line, and why? And what kind of software do you make this way?
At my last job (developing cross-platform mobile games for iOS & Android), we wrote our game engine in C++ with Cocos2d-X, with Lua bindings for every OS feature, such as in-app purchase, HTTP client, all Cocos2d objects, etc., with all of the game logic written entirely in Lua. As long as you stay entirely in Lua, the performance is excellent. The only time you take a hit is when you call between Lua & C++.
Generating the Lua bindings is very easy - there's a tool called ToLua++ which reads a simplified C++ header and generates the C code for the bindings.
Yes, you can write some functionality in C/C++, and export it to the Lua VM, and access those features from within the Lua language environment. This is really awesome!
Look underneath the covers (i.e. build MOAI from sources) and you'll see all of the techniques I'm promoting, albeit in an open, easy to understand - and DAMNED POWERFUL - form.
MOAI is a cross-platform game development toolkit, but since it's been designed by talented engineers with great ideas, it could as easily be used to build a next-generation Operating System, akin to what is happening with Android and the Dalvik VM.
The Linux Kernel + MOAI(Lua) would be a formidable challenger to the Android eco-system, imho, given a little more love .. I've already gotten my rPi booting directly to the MOAI runtime, anyway, and it's a much more fun way of writing apps for the thing than anything else right now, at least in my opinion. With the advantage that those same apps, with very, very little effort, will run on Windows, Linux, OSX .. iOS .. Android .. Chrome Native Client .. and so on ..
MOAI is an example of the ways in which the LuaVM can actually fulfill the promises made back in the Java honeymoon days ..
Write a nice, clean C library and export a decent header. Then, use the tolua tool to read that header and generate the bindings for the Lua VM. Build the LuaVM with those bindings, and suddenly you have the features of your C library available to you within the Lua language environment.
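To give a feel for what that generated glue roughly looks like, here is a hand-written sketch of my own (the counter "library" is made up; assumes Lua 5.x): it wraps a C-style type as Lua userdata with a metatable, which is the general shape of the code a tool like tolua emits from your header:

    // Sketch of hand-written binding glue for a made-up C library type.
    #include <lua.hpp>

    struct counter { int value; };                   // stand-in for your C library's type

    static int counter_new(lua_State* L) {
        counter* c = (counter*)lua_newuserdata(L, sizeof(counter));
        c->value = 0;
        luaL_getmetatable(L, "counter");             // attach the type's metatable
        lua_setmetatable(L, -2);
        return 1;
    }

    static int counter_bump(lua_State* L) {
        counter* c = (counter*)luaL_checkudata(L, 1, "counter");
        c->value++;
        lua_pushinteger(L, c->value);
        return 1;
    }

    // Call once after luaL_newstate()/luaL_openlibs() to install the binding.
    void open_counter(lua_State* L) {
        luaL_newmetatable(L, "counter");             // registry["counter"] = metatable
        lua_pushvalue(L, -1);
        lua_setfield(L, -2, "__index");              // methods are looked up on the metatable
        lua_pushcfunction(L, counter_bump);
        lua_setfield(L, -2, "bump");
        lua_pop(L, 1);

        lua_pushcfunction(L, counter_new);
        lua_setglobal(L, "counter_new");             // Lua: c = counter_new(); c:bump()
    }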
There are some great tutorials on this, here are a few I found with google-fu:
I think you will see, once you study the above, that there is a vast amount of power available to you. The Lua VM, written itself in pure C, is an utter joy to behold, and I encourage every new C programmer to inspect the Lua VM sources (it's only about 12 files..) and see for yourself just what it has to offer you. C+Lua can be a very, very powerful tool .. and anyone learning C today would benefit greatly from also doing some Lua bindings, themselves, to see what I mean ..
Why do people act surprised that C and C++ are still as viable as ever, and still very widely used?
Do they not know that basically every important piece of software today is still written in those languages?
Has this realization somehow been lost over time with the rise of languages like Java, PHP, JavaScript and Ruby, even though the major implementations of those languages are themselves built using C and/or C++?
I think a lot of people are surprised that C and C++ are still widely misused. Certainly for a lot of domains, their performance characteristics are not only attractive, but essential to the success of the application. But in many other domains, they could be replaced with other languages that would offer performance that is just as good, and be safer.
For example, I go on IRC with weechat, which is written in C. An IRC client spends most of its time idling, waiting for network data or input from the user, so certainly it could be written in a language like OCaml.
IRC clients are a bad example considering that they are often used on hosts that you can't count on having much more than a C compiler. Super-lightweight VPSs, free accounts on school servers, etc.
Nobody wants to bootstrap another language in their $HOME just to use IRC.
No, this is not true. Until C++ can reason carefully about aliasing, it can't be memory-safe. Iterator invalidation and reference invalidation are the key problems here.
Rather than repeat the code examples in detail, I'll link to some earlier examples I've shown:
shared_ptr helps, but it won't cure your circular dependency blues.
When do you allocate on the stack vs. on the heap? It matters, and is something you don't have to think about in Python or Java or C#.
Are you passing by value or by reference? Are you passing a pointer by value?
What's the actual behavior when you use the unary * operator? The copy constructor? The assignment operator?
When you use MyType foo = 3;, what's actually happening? Did you mean to use the implicit constructor, or was that a programming error?
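To illustrate that last question with a made-up MyType: a single-argument constructor doubles as an implicit conversion unless it is marked explicit, which is exactly how the accidental case slips through:

    struct MyType {
        int n;
        MyType(int x) : n(x) {}            // implicit: allows MyType foo = 3;
    };

    struct MyTypeExplicit {
        int n;
        explicit MyTypeExplicit(int x) : n(x) {}
    };

    void takes(MyType) {}

    int main() {
        MyType foo = 3;                    // OK: implicit conversion via the constructor
        takes(42);                         // also OK -- possibly a silent programming error
        // MyTypeExplicit bar = 3;         // error: the conversion must be spelled out
        MyTypeExplicit bar(3);             // fine: direct initialization
        (void)foo; (void)bar;
        return 0;
    }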
And then you have all the warts related to the single-pass compilation and inlining of headers because that's the way C++ does things...and let's not get into the misfeatures that were brought over due to C compatibility, like macros.
So no, C++ is not as safe as any language. It is manifestly unsafe. It can be a bit easier to manage its complexity with C++11, but it's still far from safe. And C++11 makes the language even larger and more complex...
Circular references are a problem in other languages, too. The existence of Java's java.lang.ref.WeakReference, or .NET's System.WeakReference, or Python's weakref illustrate this.
When it comes to heap or stack allocation, you may not even have a choice, depending on the language being used. At least C++ gives you the option of choosing in many cases. And C# does allow for both heap and stack allocation, by the way.
Likewise for the other issues you mentioned. Sure, they may have been a problem for some developers in the early 1990s. Things have changed. The language has evolved to offer ways of dealing with such scenarios, often in a much safer fashion. We now have rvalue references, move constructors, the explicit keyword, and other functionality at our disposal.
It's not at all difficult these days to write high-level C++ code that's safe and robust. The best part is that it offers all this, without taking away the power to go deeper or to do more complex or riskier things, if the need arises. But in no way are you forced to do things unsafely these days.
Weak references are more for keeping things like hash keys from holding a strong reference to an object (and thereby keeping it alive) than for preventing your program from segfaulting.
Object lifecycle and reference counting are two completely different things that only appear to be related. Even if you figure out your C++ auto_smart_magic pointers, you still have to decide when the object you're referring to is no longer necessary, just like you would in Python or Java.
I'm not sure what you're getting at with rvalue references and move semantics. They are quite unsafe. The language does nothing to prevent you from using a moved value after you've moved it.
Also, circular references are not a problem in managed languages the same way they are in C++. Managed languages automatically clean up cycles. C++ requires careful use of weak pointers to deal with cycles.
A quick example of where circular references are an issue in C++ but weak references aren't needed in other languages: consider a doubly-linked list where you remove two nodes from the middle, and those nodes are not reachable from anywhere else. Each node, however, still has a reference to the other:
X<->Y
In any system with a full GC, those nodes will get freed; in a reference counting system, those nodes will stay around forever.
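In C++ reference-counting terms, a small sketch of my own: if both links are shared_ptr, the detached X and Y keep each other alive forever; making the back-pointer a weak_ptr breaks the cycle:

    #include <memory>

    struct Node {
        std::shared_ptr<Node> next;
        std::weak_ptr<Node>   prev;   // weak: the back-pointer does not own its target
    };

    int main() {
        // X <-> Y, detached from the rest of the list
        auto x = std::make_shared<Node>();
        auto y = std::make_shared<Node>();
        x->next = y;
        y->prev = x;

        // Drop the external references. If prev were a shared_ptr, X and Y would
        // hold each other at use_count 1 forever; with weak_ptr the ownership
        // graph is acyclic and both nodes are destroyed here.
        x.reset();
        y.reset();
        return 0;
    }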
But why have you designed your interface in such a way that this could happen?
A normal interface would allow you to remove one node at a time and this wouldn't occur.
A more advanced interface may allow you to remove 2 nodes, but would return a vector or a new list to ensure proper deallocation.
Don't get me wrong; you have to think with C++ and generally know what you're doing. However, you can write safe and high level C++ and I don't necessarily think thinking about your design is a bad thing.
Modern C++ can be as safe as any language given rigorous guidelines and methods. But a project put together with "programmers off the street" is likely to involve some portion of crashes through sloppy coding.
The thing is that you have to clarify safety and danger. In more applications than people would like to acknowledge, slowing down to the point of unusability can be as dangerous a result as crashing with a null pointer. Slowness has been a standard quality of Python and Ruby development as much as null pointers have been a standard quality of C/C++ development. Either can be avoided with proper care, but these are all somewhat different failure modes.
The one thing is that C++ does present a different kind of security risk than Ruby/Python slowdowns or even Ruby/Python crashes. Despite this, at some point you still have to consider the tradeoffs.
Considering how difficult it is to handle errors in C++, I am not sure you can say it is just as safe as "any" language. Exceptions can be used, except that they cannot propagate out of destructors, they should not propagate out of constructors, and there is no standardized way to retry the operation that threw the exception.
"you need to find a good tutorial that doesn't go low level before you need it."
In other words, don't program in C++ at all. The default numeric type in C++ is fixed-width (int or floating point), the default string type is a primitive pointer (const char *), and you still see low-level pointer types tossed around as iterators (e.g. for the standard vector class).
The iterator for a vector is a class... although quite a thin wrapper around a pointer, it is still a class.
The difficulty to handle errors in C++ is generally bad API design left over from the C days. Exceptions used correctly make error handling exceptionally easy.
Your criticisms would be well taken if they were actually true:
>Exceptions can be used, except that they cannot propagate out of destructors
Exceptions are perfectly well allowed to propagate out of destructors. A destructor throwing an exception is only considered ill advised, because destructors are called during stack unwinding when another exception is propagating, and throwing an exception during another exception causes program termination. And object destruction is largely incapable of truly failing anyway: There will be no state remaining to be left inconsistent because the object is going away, unless the destructor is interacting with an object not being destroyed, in which case the error is actually with the other object and information about the error can be stored with that object or in an error queue and handled in due course after the stack unwinding is complete.
Moreover, what's your solution to the problem? If the destructor is doing nothing but deallocating memory (and calling other destructors that only deallocate memory) then it never needs to throw and there is no problem. If it's doing something else then it's either doing something that another language wouldn't allow because it has no destructors (in which case you can refrain from doing it in C++ as easily as you can switch to one of those languages), or the other language has an equivalent to destructors and then will have the same issue with throwing during exception propagation. Do you see some solution to having it both ways?
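For what it's worth, the usual compromise looks something like this sketch (the names are hypothetical): report destructor-time failures through a side channel rather than throwing, using std::uncaught_exception() to notice when unwinding is already in progress:

    #include <exception>
    #include <string>
    #include <vector>

    std::vector<std::string> deferred_errors;    // "error queue" handled after unwinding

    struct FlushOnClose {
        bool flush() { return false; }           // stand-in for real cleanup work that can fail
        ~FlushOnClose() {
            if (!flush()) {
                if (std::uncaught_exception())   // another exception is propagating
                    deferred_errors.push_back("flush failed during unwinding");
                else
                    deferred_errors.push_back("flush failed");
                // never throws, so termination is avoided either way
            }
        }
    };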
>they should not propagate out of constructors
Rubbish. Constructors don't even have return values. Throwing exceptions is the canonical way of indicating construction failure.
The primary benefit of non-throwing constructors is that they allow certain performance optimizations. For example, if a constructor throws during a std::vector resize operation, the vector class will undo everything that had been done during the resize operation so as to leave the vector in a consistent state. This is easy with copy constructors: Just destroy the copies and keep the originals. But if vector used the move constructor to move the existing elements to the newly allocated internal array then the originals are no longer valid. So vector's resize will only use the move constructor instead of the copy constructor if the move constructor is declared noexcept (or the compiler can statically determine that it doesn't throw), since the alternative would violate the vector's ability to maintain consistent state when a move constructor throws an exception.
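Concretely, implementations typically reach for std::move_if_noexcept here, so whether growth relocates elements by move or by copy hinges on the annotation; a small sketch of my own:

    #include <vector>

    struct Safe {
        Safe() = default;
        Safe(Safe&&) noexcept {}          // reallocation will move existing elements
        Safe(const Safe&) {}
    };

    struct Risky {
        Risky() = default;
        Risky(Risky&&) {}                 // may throw: reallocation copies existing elements
        Risky(const Risky&) {}            // so the vector can roll back if a copy throws
    };

    int main() {
        std::vector<Safe>  a(4);
        std::vector<Risky> b(4);
        a.push_back(Safe{});              // growth moves the existing elements
        b.push_back(Risky{});             // growth copies them, preserving the strong guarantee
        return 0;
    }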
>there is no standardized way to retry the operation that threw the exception.
How about catching the exception, addressing it, and retrying the operation in the try block?
What do other languages do that you feel is superior?
> The default numeric type in C++ is fixed-width (int or floating point)
Why do you believe this to be a serious limitation? The range of a 64-bit type is more than sufficient for the overwhelming majority of applications and for the few remaining with specialized needs (you know who you are, cryptographers and mathematicians), arbitrary precision libraries are readily available.
> the default string type is a primitive pointer (const char *)
The default string type is std::string. Which can even be used with the large bulk of the C library through the helper function string::c_str() which provides a "safe" temporary const char array for passing to C library functions.
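For instance, a trivial sketch (the path is made up):

    #include <cstdio>
    #include <string>

    int main() {
        std::string path = "/tmp/example.txt";         // illustrative path
        std::FILE* f = std::fopen(path.c_str(), "r");  // the C API receives a const char*
        if (f) std::fclose(f);
        return 0;
    }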
"Exceptions are perfectly well allowed to propagate out of destructors"
...and the default behavior is program termination.
"destructors are called during stack unwinding when another exception is propagating, and throwing an exception during another exception causes program termination"
That was true in C++98. In C++11, the default behavior of a destructor is to call std::terminate if an exception propagates out of the destructor. This must be explicitly overridden by the programmer in order to get C++98-style behavior, which is undefined (not sure if that is better or worse than being defined as "unconditional program termination").
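To see the C++11 default in code, a small sketch of my own: the first destructor is implicitly noexcept, so its throw reaches std::terminate at runtime; the second opts back in to propagation with noexcept(false):

    #include <stdexcept>

    struct TerminatesInCpp11 {
        ~TerminatesInCpp11() {                   // implicitly noexcept in C++11
            throw std::runtime_error("boom");    // std::terminate, even inside a try block
        }
    };

    struct OptedBackIn {
        ~OptedBackIn() noexcept(false) {         // explicit override of the default
            throw std::runtime_error("boom");    // propagates as in C++98 (unless another
        }                                        // exception is already unwinding)
    };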
This does not need to be a problem. If the stack were unwound after the catch block exits (or if it explicitly unwinds the stack), there would be no double exception faults. This is not something that needs to be inefficient, nor does it need to impact performance any worse than exceptions do right now, and it has already been implemented in other languages (e.g. Common Lisp).
"what's your solution to the problem?"
See above.
"If the destructor is doing nothing but deallocating memory (and calling other destructors that only deallocate memory) then it never needs to throw and there is no problem"
Except that destructors can free other resources, and some of those other resources might throw exceptions when they are freed. A destructor might, for example, close a file; if the file cannot be synchronized when it is closed, an exception should be thrown (but the C++ standard actually requires that such an exception be ignored; after all, the programmer's job is to manually close files, right?).
"If it's doing something else then it's either doing something that another language wouldn't allow because it has no destructors (in which case you can refrain from doing it in C++ as easily as you can switch to one of those languages), or the other language has an equivalent to destructors and then will have the same issue with throwing during exception propagation. Do you see some solution to having it both ways?"
See above! There should not be any program code that cannot safely throw an exception to indicate an error, and the solution is to change the order of stack unwinding and catch block execution. C++ is not the only language with a double-exception problem; Java and Python have this problem also (e.g. in Java, if you have try...finally without any catch and the finally block throws an exception). The other benefit of this approach, aside from solving the double exception problem, is that it makes exceptions more useful by making "restarts" possible: the catch block can resume execution from some point defined by the thrower of the exception, which helps a lot with encapsulation (if e.g. the error can be corrected, but only the client code can correct it; for example, writing to an external disk that was disconnected before the operation could complete).
The only complication is that the catch block needs a stack of its own, and that the catch block needs to have a pointer to the stack frame it should return to if the rest of the stack should be unwound. These are not major challenges, certainly not by comparison with the other challenges C++ compiler writers need to deal with.
"Rubbish. Constructors don't even have return values. Throwing exceptions is the canonical way of indicating construction failure."
Oh, not having return values justifies using exceptions? Destructors don't have return values. How do destructors indicate failures?
Constructor exceptions are bad because they can cause program termination when objects are constructed in the global scope. Global objects are generally bad, of course, since their construction order is undefined, but they are allowed and they are sometimes used by the standard (e.g. cin, cout). This is not a hard one to solve: give programmers a way to catch exceptions that are thrown before main is called or after main exits.
"How about catching the exception, addressing it, and retrying the operation in the try block?"
Except that forces the client code to understand how to retry an operation, which breaks encapsulation and makes error recovery more complicated (if the exception is thrown in the middle of writing some record, you now need a way to either roll back the write, figure out where the incomplete record begins, or figure out where the last write operation failed).
"What do other languages do that you feel is superior?"
Common Lisp's "conditions" system, for one, or any language that supports continuations.
"The range of a 64-bit type is more than sufficient for the overwhelming majority of applications"
Except that integer overflow vulnerabilities are common and problematic, and occur with 64-bit integer types. Saying that 64 bits is enough for any application is kind of like saying that 640k is enough RAM for any application. If you want to use integer arithmetic, use the integers -- which means arbitrary width, or else an exception being thrown if you try to represent an integer that is too big.
"for the few remaining with specialized needs (you know who you are, cryptographers and mathematicians), arbitrary precision libraries are readily available."
Having code that does not do unexpected things is not exactly a "specialized need." It is pretty easy for a programmer to forget that the sum of two positive numbers could actually be negative, or that the product of two positive numbers could be positive but less than the two factors. Most people think of integer arithmetic in terms of integers, not two's complement, even when they are programming in a language with fixed-width arithmetic -- and it takes extra mental effort to remember that you are not really dealing with integers (mental effort that could be used for other things).
Sure, you can use a library -- but then you need to be explicit about wanting arbitrary precision, and you don't get any automatic optimization from your compiler (e.g. if your compiler can prove that some value will always be in a particular range, it might remove an arbitrary precision operation and replace it with a fixed-width operation without creating a risk of an overflow).
"The default string type is std::string"
Unless you declare a string constant, in which case you get a pointer to an array of characters. C++11 conveniently adds yet another way to get a pointer to an array of characters, "raw" string literals (at least it is appropriately named), on top of the typical double-quoted string and the C++03 wide-character string literal notation.
Even with std::string, bounds checking is not guaranteed. operator[] won't check bounds. String iterators can be incremented past the bounds of the string without any exceptions being thrown. You can even try to dereference the placeholder iterator returned by string::end(), with conveniently undefined behavior (would any definition actually make sense for some application or platform?). You have at(), of course, which will do bounds checking but which breaks the [] syntax you get with C-style strings, arrays, maps, and other types (at least you have the same interface as std::vector, which might be useful).
This is the sort of brittleness that characterizes C++ and that makes reliable code hard to write. Everything looks great, until you step over the poorly-indicated boundary that separates "good for C++" and "you'll regret this later."
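For the record, the asymmetry in code (my own example): the unchecked spelling gets the nicer syntax, while the checked one throws:

    #include <cstdio>
    #include <stdexcept>
    #include <string>

    int main() {
        std::string s = "abc";

        char c1 = s[10];                   // no check: undefined behavior, may "work" silently
        (void)c1;

        try {
            char c2 = s.at(10);            // checked: throws std::out_of_range
            (void)c2;
        } catch (const std::out_of_range& e) {
            std::printf("caught: %s\n", e.what());
        }
        return 0;
    }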
>...and the default behavior is program termination.
Because it's generally a bad idea. But you indicated it couldn't be done, which is different.
>This does not need to be a problem. If the stack were unwound after the catch block exits (or if it explicitly unwinds the stack), there would be no double exception faults.
But it would bring all kinds of new trouble:
    #include <exception>
    #include <mutex>

    void lock_and_throw(std::mutex& m)
    {
        std::lock_guard<std::mutex> lock(m);
        throw std::exception();
    }

    class mutex_wrapper
    {
    private:
        std::mutex *m_ptr;
    public:
        mutex_wrapper() {
            m_ptr = new std::mutex;
            try {
                lock_and_throw(*m_ptr);
            } catch (...) {
                // undo incomplete construction as the destructor will not be called
                delete m_ptr;
                throw;
            }
        }
        ~mutex_wrapper() { delete m_ptr; }
    };
If you unwind the stack after the catch block, the lock_guard destructor inside of lock_and_throw() gets executed after the mutex has already been deleted. So either the mutex_wrapper constructor can't delete m_ptr from the catch block (and then where is it supposed to do it when it wants to re-throw the exception?) or lock_and_throw can't assume that an otherwise valid argument passed to it won't be deleted out from under its stack variables' destructors whenever any exception is thrown.
And if you run the catch block and then go back and unwind the stack, it still leaves the question of what happens when a destructor throws during stack unwinding. Do you call the same catch block again (and then have to check for possible double free etc.), or do you let an exception that was surrounded by catch(...) escape out to the calling function just because it was already called once? Neither seems pleasant.
>Constructor exceptions are bad because they can cause program termination when objects are constructed in the global scope. Global objects are generally bad, of course, since their construction order is undefined, but they are allowed and they are sometimes used by the standard (e.g. cin, cout). This is not a hard one to solve: give programmers a way to catch exceptions that are thrown before main is called or after main exits.
That's a fair suggestion for how to improve the language, but to go from there to arguing that constructors shouldn't throw exceptions doesn't follow. The conclusion should be instead that global variables shouldn't be instantiated using constructors that throw exceptions unless program termination is the desired result when such an exception occurs -- which it actually is in most cases, because having uninitialized global variables floating around is unwise to say the least.
Also, if it should ever become an actual problem in a specific instance, there is always the possibility of initializing the object with a different, non-throwing constructor and then having main reinitialize it on startup and catch any exceptions.
>Saying that 64 bits is enough for any application is kind of like saying that 640k is enough RAM for any application.
I didn't say any application. There are obviously some applications that require arbitrary precision. But most applications never need to count as high as 9,223,372,036,854,775,807. It's a pretty big number. If you counted a billion objects a second you still wouldn't get there for hundreds of years. So why pay for it constantly if you so rarely need it?
>Sure, you can use a library -- but then you need to be explicit about wanting arbitrary precision, and you don't get any automatic optimization from your compiler (e.g. if your compiler can prove that some value will always be in a particular range, it might remove an arbitrary precision operation and replace it with a fixed-width operation without creating a risk of an overflow).
The cost of not using fixed precision the times where the compiler can't optimize it out is guaranteed to be worse than the cost of not being able to optimize it as often when you've specifically asked for arbitrary precision. Moreover, who says the compiler can't optimize out the arbitrary precision library calls in cases where the values are known at compile time just because the library isn't an official part of the language? The calls are likely to be short enough to be inlined and then if all the values are static the optimizer has a good shot at figuring it out.
>Unless you declare a string constant, in which case you get a pointer to an array of characters.
String literals as const char arrays are mostly harmless because the compiler ensures that they're "right" -- if they're const then you can't accidentally go writing past the end because you can't accidentally write at all, and the compiler puts the '\0' at the end itself so that silly functions that keep reading until they find one will find it before running out of bounds. And likewise for c_str() from std::string.
Moreover, bounds checking is pretty easy if you want it:
    #include <cstddef>
    #include <stdexcept>
    #include <string>
    #include <vector>

    template<class T, class E = typename T::value_type>
    class safe : public T
    {
    public:
        E& operator[](std::size_t index) {
            if (index < this->size())
                return T::operator[](index);
            else
                throw std::out_of_range("index out of bounds");
        }
        // and so on for other common accessor functions
    };

    safe< std::basic_string<char> > s;
    safe< std::vector<int> > v;
    // etc.
But naturally then you have to pay for it in performance.
"If you unwind the stack after the catch block, the lock_guard destructor inside of lock_and_throw() gets executed after the mutex has already been deleted. So either the mutex_wrapper constructor can't delete m_ptr from the catch block (and then where is it supposed to do it when it wants to re-throw the exception?) or lock_and_throw can't assume that an otherwise valid argument passed to it won't be deleted out from under its stack variables' destructors whenever any exception is thrown."
That sounds like the sort of memory management problem that smart pointers are supposed to solve. It looks like a smart pointer would be exactly the sort of thing you would want here: when and if the stack is ultimately unwound, the lock_guard object will be destroyed first, then the smart pointer (because unwinding a constructor will cause the destructors of member objects to be invoked; note that your code throws the exception out of the constructor, and so the stack would have to be unwound at a higher level catch). The problem is not with unwinding the stack at the end of the catch block (which would not even be reached in your example, because of the throw in the catch block); the problem is that you explicitly deleted the pointer before the stack would have been unwound.
"And if you run the catch block and then go back and unwind the stack, it still leaves the question of what happens when a destructor throws during stack unwinding. Do you call the same catch block again (and then have to check for possible double free etc.), or do you let an exception that was surrounded by catch(…) escape out to the calling function just because it was already called once? Neither seems pleasant."
Really? It sounds like the second option would be what you would want: if the stack was unwound implicitly after the body of the catch block had executed, and unwinding the stack caused an exception to be thrown, then the exception was thrown out of the catch block. How is that unpleasant?
"The conclusion should be instead that global variables shouldn't be instantiated using constructors that throw exceptions unless program termination is the desired result when such an exception occurs -- which it actually is in most cases, because having uninitialized global variables floating around is unwise to say the least."
Except that a failure to initialize should at least be reported to the user. Maybe you could not get a network connection, or you could not allocate memory, or there is a missing file -- whatever it is, the user should know, and the thrower of the exception should not be responsible for telling the user. If you have restarts, you get something better -- you get the chance to try the operation again, which might be good if you have a long start-up process.
"Also, if it should ever become an actual problem in a specific instance, there is always the possibility of initializing the object with a different, non-throwing constructor and then having main reinitialize it on startup and catch any exceptions."
In other words, all classes should provide a non-throwing constructor and an initialization routine, because any object might be constructed in the global scope.
"There are obviously some applications that require arbitrary precision. But most applications never need to count as high as 2,305,843,009,213,693,952."
It is not just about counting high. If an application adds or multiplies two numbers, there is a chance of an overflow. If the application is reading one of the operands from the user, that overflow could be a security problem -- such problems are frequently reported.
"So why pay for it constantly if you so rarely need it?"
You don't have to pay for it constantly; you can have fixed-width types as something the programmer explicitly requests, or as something the compiler generates as an optimization. The real question is, why should the default type be the least safe, and why should programmers have to work harder to get a natural and safe abstraction?
"who says the compiler can't optimize out the arbitrary precision library calls in cases where the values are known at compile time just because the library isn't an official part of the language? The calls are likely to be short enough to be inlined and then if all the values are static the optimizer has a good shot at figuring it out."
Can you name a C++ compiler that does this?
"String literals as const char arrays are mostly harmless because the compiler ensures that they're "right" -- if they're const then you can't accidentally go writing past the end because you can't accidentally write at all"
You can accidentally read past the end, and you can accidentally print what you read. That can cause a lot of problems. There is no requirement that people use the standard library to iterate through a string.
"Moreover, bounds checking is pretty easy if you want it"
Once again, the programmer has to do extra work just to get something safe, because the default semantics are unsafe. If bounds checking is so easy, why not make it the default, and have unchecked access be an option for cases where speed matters? You already have at() and operator[] -- all that is needed is to switch which one of those does the bounds check.
>That sounds like the sort of memory management problem that smart pointers are supposed to solve.
Using a smart pointer would solve that specific problem, but then you're de facto mandating the use of smart pointers in every code block that an exception could be thrown through, which is every code block.
And what if mutex_wrapper is a smart pointer class? Do I now need to use a smart pointer to implement my smart pointer? Turtles all the way down? Or take operator new, which isn't an object at all so doesn't have a destructor, but still has to deallocate the memory it allocated if the constructor it calls throws, so it would need a "special" smart pointer object to use internally that only deallocates but doesn't call the destructor. It's not just calling destructors -- it's any cleanup operations because you can't do any destructive cleanup from a catch block anymore. So you end up requiring atomic RAII, and then none of the code actually implementing RAII can safely throw. It feels like just moving the problem: Now instead of not being able to throw from a destructor, you can't throw from code between resource allocation and turning on the destructor. Which is very close to saying you can't throw through a constructor.
>Really? It sounds like the second option would be what you would want: if the stack was unwound implicitly after the body of the catch block had executed, and unwinding the stack caused an exception to be thrown, then the exception was thrown out of the catch block. How is that unpleasant?
    #include <stdexcept>
    #include <vector>

    struct destructor_always_throws {    // hypothetical type whose destructor always throws
        ~destructor_always_throws() noexcept(false) { throw std::runtime_error("dtor"); }
    };

    void foo()
    {
        std::vector<destructor_always_throws> v;
        v.reserve(1000);
        for (int i = 0; i < 1000; ++i)
            v.emplace_back();
    }
Now I've got a thousand destructors that are going to throw one after the other as soon as the function returns. The nearest catch block will catch the first one, then go back to unwinding the stack where the next one throws. The next nearest catch block will catch that one and then go back to unwinding the stack again. Pretty sure I'm going to run out of catch blocks eventually, but I'd like to be able to catch all the destructor exceptions somehow and not end up with program termination, since that was supposed to be the whole idea.
>Except that a failure to initialize should at least be reported to the user.
Which it is:
$ ./a.out
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Granted the message could be less cryptic, but we already know how to fix it. Don't call constructors that throw from global scope. It's pretty much the same thing as saying don't throw out of main(), because you get the same thing.
>In other words, all classes should provide a non-throwing constructor and an initialization routine, because any object might be constructed in the global scope.
Not any object, just the ones that have already been used that way thoughtlessly and need to be fixed without impacting existing code using the object.
If you just need a global in new code (and you're insistent on a global), make it a pointer or smart pointer and then use operator new in main to initialize it.
>If the application is reading one of the operands from the user, that overflow could be a security problem -- such problems are frequently reported.
All user input needs to be validated. The language doesn't matter. If you're using a language that provides arbitrary precision, the user can instead provide you with a number which is a hundred gigabytes long and will take a hundred years for your computer to multiply. "If num > threshold then reject" is going to be necessary one way or another.
>The real question is, why should the default type be the least safe, and why should programmers have to work harder to get a natural and safe abstraction?
Because it's faster. The language is old, being fast was important then, and sometimes it still is today. But given that both types are available, isn't your complaint more with the textbooks that teach people to use the faster, less safe versions rather than the slower, safer versions by default? Or do you just not like that the trade off between runtime checking and performance is even allowed?
>You can accidentally read past the end, and you can accidentally print what you read. That can cause a lot of problems. There is no requirement that people use the standard library to iterate through a string.
There is no requirement that people don't write code that says "uid = 0" where it should say "uid == 0" either. People who write bad code write bad code. I understand that languages can and should help you avoid doing things like that, but at some point, when everybody says "don't do that" and you do it anyway, you get what you get. Most languages allow you to call C libraries from them, and if you call them wrong you get the same unsafe behavior. Does that make all those languages too unsafe to use too?
"a smart pointer would solve that specific problem, but then you're de facto mandating the use of smart pointers in every code block that an exception could be thrown through, which is every code block."
Which is already what the standard does to closures: you are basically forced to use smart pointers and capture by value if you are returning a closure from a function.
"Or take operator new, which isn't an object at all so doesn't have a destructor, but still has to deallocate the memory it allocated if the constructor it calls throws"
So have the catch set a flag, save a copy of the exception, and then after the catch block the flag is checked; if the flag is set, free the memory and throw the copied exception. Or just give programmers a way to explicitly unwind the stack at any point in a catch block. Or give programmers a Lisp-style "restarts" system, and create a standard restart for freeing resources, so that resources will only be freed if no recovery is possible (and so that constructor exceptions can be recovered without having to reallocate resources).
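C++11 does at least give you a standard way to "save a copy of the exception": a sketch of that flag-and-rethrow pattern using std::exception_ptr (do_work and release_memory are stand-ins):

    #include <exception>
    #include <stdexcept>

    void do_work() { throw std::runtime_error("boom"); }   // the guarded operation
    void release_memory() { /* e.g. free what operator new allocated */ }

    void guarded() {
        std::exception_ptr saved;
        try {
            do_work();
        } catch (...) {
            saved = std::current_exception();   // save the in-flight exception
        }
        if (saved) {
            release_memory();                   // cleanup after the catch block...
            std::rethrow_exception(saved);      // ...then throw the copied exception
        }
    }

    int main() {
        try { guarded(); } catch (const std::exception&) { /* handled here */ }
        return 0;
    }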
The difference is in what errors can be handled: handling constructor exceptions well would require a bit more care if the stack were not unwound until the catch executes, but right now destructor exceptions cannot be handled at all unless you are willing to have a program terminate (which is the default behavior).
"Pretty sure I'm going to run out of catch blocks eventually, but I'd like to be able to catch all the destructor exceptions somehow"
As opposed to the current situation, where your program would terminate without ever reaching a catch block? This sounds like another case where restarts would be handy: stack unwinding could set a restart (or the vector destructor, since the objects are being destroyed there), so that one catch block could keep invoking a restart and then handle each exception until no objects remain. I can imagine cases where it would be better to ignore some errors until a particular resource is freed than to have a program quit or to allow a low-priority error to prevent that resource from being freed.
So if your point is, "Catching before the stack has been unwound necessitates a restart system," I can agree to that, especially since catching before unwinding makes restarts possible.
"Granted the message could be less cryptic,"
It could also be completely useless. What if there is no terminal? What if the user is only presented with a GUI? The default exception handler has no way to know what sort of user interface a program will have, so it has no reliable way to present errors to users, let alone to allow users to correct errors.
"we already know how to fix it. Don't call constructors that throw from global scope."
Which is basically saying that all classes need a non-throwing constructor, or that you should never have a global object (not necessarily a bad idea, but people still create global objects sometimes). A better idea, which I think we agree on, would be to give programmers a way to handle exceptions outside of main.
"All user input needs to be validated"
OK, sure. Except that people do not always validate input, which is how we wind up with bugs. Input validation adds complexity to code, and like all things that involve extra programmer work, it is likely to be forgotten or done incorrectly somewhere.
"The language doesn't matter"
Sure it does, because the language decides whether forgetting to validate some input will result in the program terminating (from an exception) or the program having a vulnerability (because it will use bad input). If there is no bounds checking on arrays, failing to validate input that is used as an array index is a vulnerability. If there is no error signalled when an integer overflows, failing to validate input that is used in integer arithmetic is a vulnerability.
I think experience has shown that it is easy for programmers to forget about validating and sanitizing input, and that it is easy for programmers to validate or sanitize input incorrectly. SQL injection attacks can be prevented by either (a) sanitizing all inputs or (b) using parameterized queries; it is hard to argue that (a) is a superior solution to (b), because there are fewer things to forget or get wrong with (b).
"the user can instead provide you with a number which is a hundred gigabytes long and will take a hundred years for your computer to multiply"
Sure, and then they can trigger a denial of service attack. And then the admin will see that something strange is happening, kill the process, and take some appropriate action, and that will be that. Denial of service attacks are a problem, sure, but it is almost always worse for a program to leak secret data or to give an untrusted user the ability to execute arbitrary commands -- especially when the user might do so without immediately alerting the system administrator to the problem (which spinning the CPU will probably do). It is also worth noting that a vulnerability that allows a remote attacker to run arbitrary code on a machine gives the attacker the ability to run a denial of service attack (the attacker could just use their escalated privileges to spin the CPU); denial of service does not imply the ability to control a machine or read (or modify) sensitive information.
"isn't your complaint more with the textbooks that teach people to use the faster, less safe versions rather than the slower, safer versions by default?"
It's not just the textbooks; it is the language that encourages this. It is harder to use arbitrary precision types than to use fixed-width types in C++, because the default numeric types are all fixed width. That, in a nutshell, is the problem: C++ encourages programmers to write unsafe code by making safe code much harder to write. It is not just about numeric types: bounds checking is harder than unchecked array access, it is easier to use a primitive pointer type, it is easier to use a C-style cast, etc.
It would not have been hard to say that "int" is arbitrary precision, and to force programmers to use things like "uint64_t" if they want fixed-width (and perhaps to have a c_int type for programmers who need to deal with C functions that return the C "int" type). It would result in slower code for programmers who were not paying attention, but that is usually going to be better than unsafe code (can you name a case where speed is more important than correctness or safety?). Even something as simple as ensuring that any integer overflow causes an exception to be thrown unless the programmer explicitly disables that check (e.g. introduce an unsafe_arithmetic{} block to the language) would go a long way without forcing anyone to sacrifice speed.
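A sketch of what "overflow throws unless you opt out" could look like today (not a standard feature; __builtin_add_overflow is a GCC/Clang extension):

#include <stdexcept>

struct checked_int {
    int value;
    checked_int operator+(checked_int rhs) const {
        int result;
        if (__builtin_add_overflow(value, rhs.value, &result))
            throw std::overflow_error("integer overflow");
        return {result};
    }
};
// checked_int{INT_MAX} + checked_int{1} throws instead of silently wrapping

A real proposal would need to cover every operator and conversion, but the principle is the same: the unchecked version should be the one that takes extra typing.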
"People who write bad code write bad code"
It's not just bad code; people can forget things when they are on a tight deadline. It happens, and languages should be designed with that in mind.
"at some point, when everybody says "don't do that" and you do it anyway, you get what you get"
I think there is a lesson from C++98 that is relevant here. Everyone says not to use C-style casts, and to use static_cast, dynamic_cast, or const_cast instead (or reinterpret_cast if for some reason it makes sense to do so), yet you still see people using C-style casts. It is just less difficult to use a C-style cast: less typing, less mental effort (there is no need to choose the correct cast), fewer confusing messages from the compiler, etc. Likewise, people continue to use operator[] in lieu of at()/iterators, because it is less effort.
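The cast situation in three lines (illustrative only):

const char* s = "read-only";
char* a = (char*)s;                  // compiles silently; the const quietly disappears
// char* b = static_cast<char*>(s);  // rejected by the compiler
char* c = const_cast<char*>(s);      // allowed, but the intent is now spelled out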
Blaming programmers for writing bad code when that is the easiest thing for them to do is the wrong approach (but unfortunately, it is an approach that seems common among C and C++ programmers). The right approach is to make writing bad code harder, and to make writing good code easier.
"Most languages allow you to call C libraries from them, and if you call them wrong you get the same unsafe behavior. Does that make all those languages too unsafe to use too?"
No, because most languages with an FFI do not require you to use the FFI, nor do they encourage you to do so. The two FFIs I am most familiar with are JNI and SBCL's FFI, and both of those require some effort to use at all. One cannot carelessly invoke an FFI in most languages; usually, a programmer must be very explicit about using the FFI, and it is often the case that special functions are needed to deal with unsafe C types. You could be a fairly successful Java programmer without ever touching the FFI; likewise with Lisp, Haskell, Python, and other high-level languages.
I am actually a big fan of languages with FFIs, because sometimes high-level programs must do low-level things, but most of the time a high-level program is only doing high-level things. FFIs help to isolate low-level code, making it easier to debug low-level problems and allowing programmers to work on high-level logic without having to worry about low-level issues.
It is worth noting that there is nothing special about C. You can take high-level languages and retool their FFIs for some other low-level language, and the FFIs would be just as useful. You usually see C because most OSes expose a C API, and so low-level code for those systems is usually written in C. Were you to use Haskell on a system that exposes a SPARK API, you would want an FFI for SPARK, and your FFI would be less of a liability (since SPARK is a much safer language than C). So no, I do not think you can argue that having an FFI for calling into code written in an unsafe language makes a high-level language unsafe; if a high-level program is unsafe because of its use of C via an FFI, the problem is still C (and the problem is still solved by either not using C or by only using a well-defined subset of C).
>Which is already what the standard does to closures: you are basically forced to use smart pointers and capture by value if you are returning a closure from a function.
Well, you have to make sure somehow that the thing a pointer is pointing to will still be there when the closure gets executed, sure. But the change you're asking for would have a much wider impact. Anywhere you have a dynamically allocated pointer, or really anything that needs destruction at all, without an already-associated destructor would become unsafe for exceptions. Which is commonly the case in constructors. It's basically this pattern (which is extremely common) that would become prohibited:
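Roughly this shape (the names below are made up for illustration):

struct widget {
    char* buffer;
    widget()
    {
        buffer = new char[1024];   // raw resource, nothing owns it yet
        try {
            risky_setup();         // anything in here may throw
        } catch (...) {
            delete[] buffer;       // ~widget() will not run for a half-constructed
            throw;                 // object, so clean up by hand and rethrow
        }
    }
    ~widget() { delete[] buffer; }
    void risky_setup();
};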
What you would need to do is exclude exceptions from passing through any constructor that has an associated destructor, because the destructor wouldn't be called if the constructor throws and the catch block couldn't safely destroy the resources before the stack is unwound.
>So have the catch set a flag, save a copy of the exception, and then after the catch block the flag is checked; if the flag is set, free the memory and throw the copied exception.
Obviously it can be worked around, but can you see how quickly it becomes a headache? And now you're adding code and complexity to operator new, which is a good candidate for the most frequently called function in any given program.
>Or just give programmers a way to explicitly unwind the stack at any point in a catch block.
In other words, the existing functionality is good and necessary for some circumstances, but you want something different in addition to it.
It seems like you're looking for something like this:
void foo::bar()
{
    connect_network_drives();
    try {
        do_some_stuff();
    } catch(file_write_exception) {
        check_and_reconnect_network_drives();
        resume;
    } catch(fatal_file_write_exception) {
        // epic fail, maybe network is dead
        // handle serious error, maybe terminate etc.
    }
}
void do_some_stuff()
{
    // …
    file.write(stuff);
    if(file.write_failed()) {
        throw file_write_exception();
        on resume {
            file.write_stuff(stuff); // try again
            // no resume this time if error not fixed
            if(file.write_failed())
                throw fatal_file_write_exception();
        }
    }
}
But if that's what you want, why do you need special language support, instead of just doing something like this?
#include <functional>
#include <vector>

// (this class could be a template for multiple different kinds of errors)
class file_write_error
{
private:
    static thread_local std::vector< std::function<void()> > handlers;
public:
    // const reference so a temporary lambda can bind to it
    file_write_error(const std::function<void()>& handler) {
        handlers.push_back(handler);
    }
    ~file_write_error() {
        handlers.pop_back();
    }
    static void occurred() {
        // execute most recently registered handler
        if(!handlers.empty())
            handlers.back()();
    }
};
// the static member still needs its one out-of-class definition:
thread_local std::vector< std::function<void()> > file_write_error::handlers;
// (and then poison operator new for file_write_error
// so it can only be allocated on the stack
// and destructors run in reverse construction order)
void foo::bar()
{
    connect_network_drives();
    try {
        file_write_error handler([this]() {
            check_and_reconnect_network_drives();
        });
        do_some_stuff();
    } catch(fatal_file_write_exception) {
        // epic fail, maybe network is dead
        // handle serious error, maybe terminate etc.
    }
}
void do_some_stuff()
{
    // …
    file.write(stuff);
    if(file.write_failed()) {
        file_write_error::occurred(); // handle error
        file.write_stuff(stuff); // try again
        if(file.write_failed())
            throw fatal_file_write_exception();
    }
}
It seems like you're just looking for an error callback that gets called to try to fix a problem before throwing a fatal stack-unwinding exception is necessary. And it's not a bad idea, maybe more people should do that. But doesn't the language already provide what is necessary to accomplish that? Are we just arguing about syntax?
You could easily use that sort of error handler in a destructor as an alternative to exceptions as they exist now. There is nothing a catch block could do before stack unwinding that the error handler can't. And if you call such a thing and it fails, the two alternatives of either ignoring the error or terminating the program are all you really have left anyway, because if there were anything else to do, you could have done it in the destructor or in the error handler. (I can imagine that an error handler might benefit from being able to call the next one up in the hierarchy, if any, analogous to 'throw' from a catch block, but that could be accomplished with minimal changes to the above.)
"In other words, the existing functionality is good and necessary for some circumstances, but you want something different in addition to it."
The problem with the existing approach is that the safety of throwing an exception from a destructor depends on the context in which the destructor is invoked, and there is no workaround for it. What I am proposing would ensure that the safety of throwing an exception would not be dependent on why the function throwing the exception was called; edge cases where this might be unsafe could either be worked around (perhaps in a headache-inducing way, but nothing close to the headaches associated with destructors having no way to signal errors), or a language feature could be added to solve those problems.
"It seems like you're just looking for an error callback that gets called to try to fix a problem before throwing a fatal stack-unwinding exception is necessary. And it's not a bad idea, maybe more people should do that. But doesn't the language already provides what is necessary to accomplish that? Are we just arguing about syntax?"
Well, if we are arguing about programming languages, what is wrong with arguing about syntax? The language does provide enough features to manually implement restarts -- but if that is your argument, why do you even bother with C++? C gives you everything you need to implement any C++ feature; assembly language gives you all you need to implement any C feature. We use high-level languages because our productivity is greatly enhanced by having things automated.
Take exception handling itself as an example. We do not actually need the compiler to set it up for us -- we already have setjmp/longjmp, which are enough to manually create an exception handling system. The problem is that the programmer would be responsible for setting up everything related to exceptions -- stack unwinding, catching exceptions, etc. Nobody complains about exceptions being a language feature instead of something programmers implement by hand -- so why not add another useful language feature?
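For anyone who has not seen it, the setjmp/longjmp version looks roughly like this (a toy: no nesting, no thread safety, and longjmp will not run destructors):

#include <csetjmp>
#include <cstdio>

static std::jmp_buf handler;

static void might_fail(int x)
{
    if (x < 0)
        std::longjmp(handler, 1);      // "throw": jump back to the setjmp site
    std::printf("ok: %d\n", x);
}

int main()
{
    if (setjmp(handler) == 0) {        // "try": returns 0 the first time through
        might_fail(-1);
    } else {                           // "catch": reached via longjmp
        std::printf("recovered from error\n");
    }
}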
Rather than callbacks, what you really want for restarts is continuations. One way a Lisp compiler might implement the Lisp "conditions" system would be something like this: convert the program to continuation passing style; each function takes two continuations, one that returns from the function (normal return) and one that is invoked when an exception is raised. Each function passes an exception handler continuation to the functions it calls; when a function declares an exception handler, it modifies the exception handler continuation to include its handler (the continuation would need to distinguish between exception types; this can be done any number of ways), which would include the previous exception handler continuation (so that exceptions can be propagated to higher levels). Exception handler continuations will take two continuations: one to unwind the stack (which is used to "return" from the handler), and the restarts continuation that is used for invoking restarts. When an exception is thrown, the thrower passes a restart continuation (or a continuation that throws some exception if no restarts are available), and then the handler continuation will either invoke the appropriate handler or it will invoke the handler continuation from the next higher level.
Complicated? Yes, and what I described above is just the simplified version. It should, however, be done automatically by the compiler, and the programmer should not even be aware that those extra continuation arguments are being inserted. The stack unwinding continuation could potentially be exposed to the programmer, and for convenience it could be set up to take a continuation as an argument -- either the rest of the handler, or else the "return from handler" continuation that exits the handler, so that the programmer could perform some error handling after the stack is unwound (e.g. the clean-up code), although this could potentially be accomplished using restarts (but that might be less "pretty").
Perhaps continuations should be suggested for C++14; it is an almost logical follow-up to the introduction of closures in C++11.
Keep in mind that the "modern C++" mentioned involves the use of things like RAII, smart pointers, the STL and Boost, rvalue references, move constructors, and the buffer overflow protection offered by the major C++ compilers.
The use of pointers and direct pointer manipulation, for instance, can often be completely avoided with ease these days. That cuts out a large number of potential problems.
It's a much different situation than it was in the past.
Smart pointers, Boost, rvalue references and move constructors are exactly the things that obscure the hell out of the code, frequently making it incomprehensible to anyone but the author. Crashing on a dereferenced NULL pointer is peanuts compared to the clusterf#ck of logic that stems from a mess in the dev's head. C++ is less safe because its adepts, on average, are sloppier and messier than those who favor other languages (C included).
You're right that people tend to overestimate the difficulty of debugging crashes and memory leaks. A lot of C++ code I've seen has tried to move heaven and earth to ensure that mistakes are impossible, at the cost of introducing a lot of rather expensive (time to write, time to execute, time to maintain) infrastructure - or baggage, if you're unkind - that also makes things unclear in the debugger.
This isn't quite a waste of time, but it's been my own experience that it's cheaper to just make things as straightforward as you can, then simply ask everybody to try not to make mistakes. Often you can build in systems that make it easy to diagnose problems when they occur, or just rely on people's debugging skills. Some programmers consider this a total cop-out - they're right, I should be ashamed - but I've found the overall TCO to be lower.
You might have to find this out the hard way to believe it.
The worst thing is that some people think the alternative should be C ;)
Completely agree. Pointer manipulation is a sign you should grab a coffee and chat it over with a friend. Sometimes necessary, of course, but that is the beauty of C++.
"The use of pointers and direct pointer manipulation, for instance, can often be completely avoided with ease these days"
Really? I am pretty sure there are a large number of STL implementations that give you a pointer when you ask for a vector iterator. I suppose you could avoid iterating on vectors, but you would also have to avoid operator[] (which by default performs no bounds checking, so the pointer arithmetic is just as unsafe as it would have been with a primitive array type); you'd be stuck with at(), which few compilers will optimize even if it could be shown that the index will never be out of bounds.
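The trade-off in miniature (illustrative only):

#include <cstddef>
#include <vector>

int read_element(const std::vector<int>& v, std::size_t i)
{
    // return v[i];   // unchecked: an out-of-range i is undefined behavior
    return v.at(i);   // checked: an out-of-range i throws std::out_of_range
}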
"That cuts out a large number of potential problems"
Yet the language encourages programmers to do things the unsafe way. It is a lot less effort to use a low-level pointer type, a fixed-width integer type, etc. than to use smart pointers, STL containers, iterators, Boost (which is not even standard), and so forth. Just as an example, a C++ programmer must first choose between multiple smart pointer types (where should you break your pointer cycle with a weak_ptr?), whereas a primitive pointer is easily accessible and requires no decision-making (this was the pattern with casting: C-style casts are much more convenient than static_cast, dynamic_cast, or const_cast, and so you saw C-style casts being used all over the place). While a Java or Python programmer on a tight deadline might produce slow code, a C++ programmer will produce unsafe code that is far more likely to have bugs (even if the high-level design is bug-free).
> Do they not know that basically every important piece of software today is still written in those languages?
A blatant overstatement. There's lots of important work done for the CLR (MS, many others), Java (Sun, Google, Amazon, bits of Facebook, IBM, etc, etc) or in Python (Dropbox) or in Cobol (banks...sigh). Yeah, there's plenty of important C/Cpp code but it's probably in a minority of "important", even if important just means "commercial" or "popular".
I disagree. Most operating systems, desktop applications, web browsers, and fundamental Internet services (web servers, BIND, MTAs, etc) are written in C/C++. Dropbox is not in that league of important.
So you're arguing with me over what "important" means. If "important" means nearer the bottom of the stack, then yeah, you win. I doubt many people see it that way though.
If by "bottom of the stack" you mean "the code that the top of the stack depends on in order to run at all," then I would say that he is correct in what "important" means.
It seems to me that people magic away these things... The only context in which they can place programming languages is a web application, and anything below that is irrelevant.
No, but presuming that the best language was used to implement these runtimes, these languages would be worse off if it weren't for C/C++. And the runtimes qualify as important software in their own right, simply because they inherit from the importance of all the software that runs on them written in other languages.
> these languages would be worse off if it weren't for C/C++.
Definitely.
> And the runtimes qualify as important software in their own right, simply because they inherit from the importance of all the software that runs on them written in other languages.
Also agreed.
The only thing I'm trying to disprove is the thesis that "basically" no important software is written in languages other than C/Cpp.
To add to what codex said, it's impossible to get really proficient at either Python, Ruby, or any other interpreted language without having some sort of understanding of the underlying platform.
"Why do people act surprised that C and C++ are still as viable as ever, and still very widely used?"
Surprised? No, I'm terrified. It is scary to think that critical systems are written in poorly defined languages that encourage programmers to create buggy, unreliable software.
"Do they not know that basically every important piece of software today is still written in those languages?"
That is a problem that needs to be (slowly) corrected. We can start by not writing tomorrow's important software in those languages.
But C/C++ are the building blocks for all these other languages, your argument seems to suggest that it's impossible to write critical systems in these languages.. but what would the alternative be? Surely if this is the case we can't use Linux as it's written in C, or we can't use Java because it's C++ based...
"C/C++ are the building blocks for all these other languages"
Not necessarily. The SPARK compiler was originally written in Ada. Python is moving towards a bootstrapped model with PyPy. Lisp compilers have been written in Lisp. There was a Perl 6 interpreter written in Haskell. The HaXe compiler is written in OCaml:
"Surely if this is the case we can't use Linux as it's written in C"
Would you trust your life to the Linux kernel? Would you be comfortable with that? I still see the occasional kernel panic; how would you like a kernel panic that sent your car swerving into a tree?
"we can't use Java because it's C++ based..."
...or we bootstrap our compilers and programming systems, and let C++ fade away (as it should). What stops us now is the volume of code, but we could go a long way by just not writing more C++ code, so that the problem does not continue to expand.
I still disagree that software, critical or not, written in these languages is a bad thing, and although some languages are moving away from a C/C++ compiler, there are still traits carried through from previous versions to each new version [1, interesting read on Thompson and gcc].
The Linux kernel is used by the USA's DoD; it's also used by many stock exchanges, colliders, mobiles, set top boxes, routers/switches -- the list is endless. I would much prefer to put my life in the hands of the Linux kernel than Darwin's or Windows', and to some degree we probably do daily.
Out of interest, can you elaborate on what exactly it is with C++ that you think makes it so scary that can't be avoided by adopting certain techniques (RAII for example)?
"Out of interest, can you elaborate on what exactly it is with C++ that you think makes it so scary that can't be avoided by adopting certain techniques (RAII for example)?"
The two biggest issues are (1) lots of undefined behavior and lots of complicated semantics and (2) the difficulties involved with handling errors.
The undefined behavior is something that Stroustrup justified by pointing to all the undefined behavior in C. This is a poor response in my view, since there are undefined behaviors in C++ that could have been defined without breaking any compatibility with C; for example, the order in which function arguments are evaluated. There are also things that were left unspecified for no apparent reason, such as a non-void function where some control paths have no return statement (which can be disastrous in C++ and can make debugging a giant pain). When the standard cannot even rule out programs that could never be compiled into something sensible, the standard is deficient (especially when most compilers can issue a warning over this sort of thing -- so why not just define it as a compile-time error?).
C++ also forces programmers to deal with complicated semantics as the default. It may not seem like a big deal, but it is easy for a programmer to forget that "1/3" is not "one third." It is also easy for a programmer to forget that "x+1" might actually have a value that is less than "x." The problem here is that C++ makes these things the default, citing "performance;" it would be far better if the default were arbitrary-width types, smart pointer/smart array types, and other high-level types, with low-level types like "32-bit integer" or "pointer to a character array" requiring an explicit cast by the programmer. Critical systems have failed because of obscure "corner case" semantics like integer overflows in the past; these also create vulnerabilities in software.
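Spelled out (a throwaway example; what the second line prints is not even guaranteed, which is the point):

#include <climits>
#include <cstdio>

int main()
{
    std::printf("%d\n", 1 / 3);     // prints 0, not "one third"
    int x = INT_MAX;
    std::printf("%d\n", x + 1);     // signed overflow: undefined behavior; on common
                                    // hardware it wraps to a value less than x
}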
Error handling is a complete mess in C++. Exceptions are OK, but they cannot be thrown from destructors, they should not be thrown from constructors, and there is no way to say, "retry the operation that threw the exception" -- exceptions destroy the entire stack, and so the client code is responsible for knowing how to retry things. If you are in the middle of writing a record to a thumb drive and that thumb drive is removed, what do you do? You cannot simply prompt the user to reinsert the drive, and then have the "OK" button cause the rest of the record to be written (unless the exception is thrown again because the thumb drive is not there), unless you want your IO library to open dialog windows (and so much for encapsulation). That basically makes exceptions no better than checking return values, other than that exceptions are a little faster and cannot be ignored.
The situation with exceptions is so bad that even the C++ standard library requires some errors to just be ignored. The standard IO classes, for example, are required to silently fail if there is an error closing an underlying file when the object is being destroyed. This forces programs to explicitly invoke a member function to close the stream before destroying the object -- so what is the point of having a destructor at all? Destructors have no good way to signal errors: they cannot throw exceptions, they cannot return an error code, and the order in which destructors are called is undefined so they cannot set a global error flag -- so you can only put things in destructors that either never create errors or which can only create errors that are safe to ignore (can you think of any such errors?).
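That workaround looks something like this (a sketch; the file name is invented):

#include <fstream>
#include <iostream>
#include <string>

void append_log(const std::string& line)
{
    std::ofstream out("app.log", std::ios::app);
    out << line << '\n';
    out.close();                    // close explicitly, while the error is still visible
    if (out.fail())
        std::cerr << "log write failed\n";
}   // had we let ~ofstream do the close, a failure here would be silently swallowed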
C++11 could have fixed this error handling problem: they could have required that the stack only be unwound after the catch block ends, which would have both solved the destructor exception problem and opened the possibility of Lisp-style "restarts." Unfortunately, the standards committee did nothing of the sort, and instead defined the default behavior of destructor exceptions to be "abort the program." Here I was, thinking that "abort" was something that should only be called under the gravest of circumstances, yet a C++ program might abort because it was unable to write to a log file. Typically, one will hear the argument that exceptions should only be used in "exceptional" situations, which we are meant to think only means situations that are grave enough that "abort" can be called -- yet the standard defines exceptions for errors that can be corrected and for which a program exit is not strictly necessary, like an attempt to write to a full disk (and lest you think that this is no big deal, I have seen people lose over 24 hours' worth of work because a program exited when the disk was full -- and you definitely would not want a 911 dispatch system to shut down just because it could not write to a log file).
You point to the use of the Linux kernel in critical telecom and financial systems as if it were evidence that the Linux kernel can be relied upon. I see the Linux kernel panic periodically. I doubt you would be happy if your 911 call were dropped because some telecom equipment was suffering a kernel panic. You see Linux in these places because technically better systems are too expensive and there are too few people who know how to configure and use those systems.
>> Has this realization somehow been lost over time with the rise of languages like Java, PHP, JavaScript and Ruby, even though the major implementations of those languages are themselves built using C and/or C++?
This feels like an oversimplification. There are surely very good reasons to choose [insert language other than C] for production / prototyping / development purposes. The fact that [language] was built in and / or relies on C libraries is at that point not central to the discussion.
> Use Autotools, C’s de facto cross-platform package manager.
Seeing the word "autotools" sends a familiar, but scary, shiver down my spine. After some years with maven and CPAN, I don't ever want to go back to autotools, no thanks.
It's also hard to think of anything less new school than freaking Autotools. There's a good kind of old school (C, UNIX, Audrey Hepburn) but Autotools isn't it.
FWIW, some of the advice in the book is pretty bad. The most unbelievable is to ignore memory leaks under 100Kb, but there are a lot of other examples.
The first half of the book covers Git, autotools, Makefiles, and random topics tangentially related to C. If you're unfamiliar with those, it's an okay introduction to them, but there's some dubious advice in those sections, too.
These days I won't even consider a language for serious work unless it has a reasonable set of built-in data structures and algorithms. I'm just not interested in shopping around for a basic vector or hash or tree container or for a compatible set of elementary searching and sorting algorithms.
I know you can find implementations of all of these in C but this is the kind of thing that really needs to be built in to any modern language. C is fine for writing device drivers etc where you can't really assume anything about your environment but for anything else it seems too bike-sheddish.
It seems this term has evolved beyond any recognition to me. In what way is choosing C over another language that may provide more built in "arguing over something trivial/tangential"?
I love C for its simple syntax and "charm" (can a programming language be charming?) While the book does look fascinating, I can't help but wonder if it would be easier to go with languages like "Go" which was supposedly designed with 21st century considerations in mind?
Yeah, I had the same thought. In fact, in the interview, the author actually picks up a copy of K&R C and looks through the index for "threading" and finds nothing, whereas Go was designed with concurrency in mind.
On the other hand, as the author said, C is over 40 years old. Many consider that a weakness, but he considers it a strength, given the large number of C libraries available, whereas you don't have the same number of libraries with native Go bindings. Ultimately, of course, it comes down to using the right tool for the job. That may be C, that may be Go, that may be a higher-level language like Python, Ruby, etc.
I've heard that doing that with Wayland's libraries is a royal pain because Wayland makes extensive use of function pointer based callbacks. So Go calling C calling Go. I'm not sure to what extent this sort of thing is common though, I haven't seen it often myself.
I bought a copy after it was discussed here a while back. Overall it's a very good book and is worth purchasing IMHO. There are some notable holes and assumptions but it's probably the best attempt for a number of years.
I wrote kernel-level code for quite a few years. There, barebones C makes a lot of sense. (I can make a case for a subset of C++ that avoids many C++ issues, including use of RAII and similar things that get you into trouble in environments where /nothing/ is free, and you have to think carefully about the teardown strategy of every single allocation).
Don't confuse "easy" with "simple". That's my latest mantra.
Do elaborate. I come from the same background and I've been contemplating forking C for several years now. What you said sounds all too familiar to my own reasoning, so I'd be very curious to know what you have in mind. In broad strokes.
Take a look at the C++ coding standards adopted for high-reliability areas. For example, the coding standards for the Joint Strike Fighter: http://www.stroustrup.com/JSF-AV-rules.pdf
Those were mostly good (a little bit of crazy, but not much).
The Google guidelines are okay for non-kernel work. I've worked on three teams that have independently converged on very similar guidelines (at Apple, a start-up, and a group in Microsoft). I find that convergence quite interesting.
I've taken to telling others that when I say 'easy', I really mean straight forward. Often I'll think of a solution that's "easy", yet it may take a good two weeks to get done.
I love programming in C, however I'm getting the impression that right now almost every open job position is for java/.net/web developers. Besides embedded, where are C programmers needed?
C is a great language. I first learned programming with C back in college. After a few years off, I decided to pick up programming again but there aren't many resources for learning C. Most of the tutorials out there are for Ruby and Python. I have been learning Ruby for a while now and I'm now starting to go back to using C.
I think if there were more educational websites teaching C, more people would use it. I think new programmers will appreciate coding a lot more if they learned C instead of using Ruby.
I'll give one for old-school: heap fragmentation. You can't fragment a stack, block allocators have predetermined fragmentation behavior, and the heap is "who knows." It is possible to malloc() and free() for a very small net total, and yet no longer be able to allocate a contiguous region that is a significant fraction of the heap.
With a modern VM system, it's not quite as big a deal since your heap is effectively infinite for many applications. It is still an issue on embedded systems, and was a problem back in the day too.
Question from a newbie C developer: How can you avoid using malloc? Are you finding some clever way to do everything on the stack? Or are you perhaps allocating a large block of contiguous memory and slicing it up on your own?
I've heard of both of those approaches and they sound pretty hard, so I'd be interested to hear your thoughts/be pointed to good references on the topic. Thanks in advance.
A lot of the time, the nature of the data and its lifetime makes specialized allocators easy to use. Not infrequently they can actually be more convenient than malloc/free when you have a bunch of allocations (think nodes in a parse tree) with the same lifetime, so their storage can be released as a single operation. For that example, you can get the same usability with a more traditional malloc/free interface using something like hmalloc, but if the parse tree nodes all occupy the same contiguous block, you get it automatically. And of course it's a lot faster to release a single block (if you release the storage back to the OS, the page manager will still have some per-page work for a virtually contiguous block of memory).
Basically, once you have a nailed down system design, it's usually not any significant extra work to use custom allocators. Where it can suck away your productivity is during iterative design and development where you're still figuring out the different kinds of data, what subsystems exist and who owns what when. But in that scenario, garbage collection is even easier than malloc/free, so it's not a mortal sin to use the Boehm GC or just leak memory (heresy!) if you can get away with it--only temporarily, of course, while you figure out the design.
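A minimal sketch of that kind of single-lifetime allocator (fixed capacity, no per-object destructors, alignment handled crudely):

#include <cstddef>
#include <cstdlib>

struct arena {
    char*       base;
    std::size_t used;
    std::size_t cap;
};

bool arena_init(arena& a, std::size_t cap)
{
    a.base = static_cast<char*>(std::malloc(cap));
    a.used = 0;
    a.cap  = cap;
    return a.base != nullptr;
}

void* arena_alloc(arena& a, std::size_t n)
{
    // round the request up so the next allocation stays suitably aligned
    const std::size_t align = alignof(std::max_align_t);
    n = (n + align - 1) & ~(align - 1);
    if (a.used + n > a.cap)
        return nullptr;             // arena exhausted
    void* p = a.base + a.used;
    a.used += n;
    return p;
}

void arena_release(arena& a)
{
    std::free(a.base);              // every "free" in the tree happens right here
    a.base = nullptr;
    a.used = a.cap = 0;
}

Parse tree nodes come out of arena_alloc(), and tearing the whole tree down is a single arena_release().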
There's a large block of memory that's easy to slice up on your own. It's called the BSS section, which is where global and static variables without an initial value are stored. BSS gets zeroed before main() is called.
This isn't possible in all cases, but for embedded software especially it's often possible to create all data structures as global or static variables. The compiler takes care of alignment concerns for you, and you're guaranteed not to have heap issues because you never use the heap. Of course, you can run out of the objects you pre-allocated in BSS, but that's an easier problem to diagnose and fix.
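In code, that style looks roughly like this (a sketch; the struct and the pool size are invented):

struct connection { int fd; bool in_use; };

static connection pool[32];         // zero-initialized before main(), lives in BSS

connection* acquire_connection()
{
    for (auto& c : pool)
        if (!c.in_use) {
            c.in_use = true;
            return &c;
        }
    return nullptr;                 // pool exhausted: easy to detect, easy to resize
}

void release_connection(connection* c)
{
    c->in_use = false;
}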
1. Most compilers and systems restrict the stack to be much smaller than the heap.
2. If you use the stack for dynamic memory allocation as well, you limit your capacity to write recursive functions.
3. It depends on the kind of system you are developing. If you are building a tool to analyze millions of tweets, you have to use the heap; there is no alternative. If you are building an embedded application, say loading your contact list to send a message, then you can get away with allocating the memory on the stack itself. But then most phones do not allow you to make a list of more than 25-30 people anyway.
So it depends on the application one wants to develop, and K&R recognized this back in the early 70s, which is why alloca was provided in addition to malloc and calloc.
I never said you should put everything on the C stack. I also never said you couldn't use dynamic memory. You can have a custom stack or block allocator with memory coming from sbrk(), mmap(), or preallocated in the BSS.
There is quite simply a vast array of libraries out there that every developer has access to. As Ben points out at the 10:50 mark, looking at GitHub we can find something on the order of 150,000 C projects.
That's great, unless your employer/client/etc. is allergic to open source. I really can't consider something part of the language unless it's in a standard library that comes out of the box with the major compilers.
Most anything that you might want will be BSD-style licensed without the attribution clause. If your employer is allergic to that... then your employer is an idiot. GPL'd, even LGPL'd, libraries are fairly few and far between.
So learning C is on my list of things to do in 2013.
I'm pretty experienced as a programmer, but mostly higher level languages than C.
Would this book be a good place to start?
What other recommendations could experienced C programmers offer.
I'm not simply asking about the language, but best practices for memory management and tools and, above all, practical advice.
IMHO you're not an experienced programmer, but there will always be someone below you on the stack, so I should shut my loud mouth before someone puts my words in it.
To your request, this is not a good place to start. By the table of contents, it appears to assume a working knowledge of C.
Here are some ideas you can use to design your own "teach me C" course.
Read K&R, and write some simple toys. Write your own "cp". Handle some errors, add some options. Try out fread(3)/fwrite(3), then try read(2)/write(2).
Then write "mv" and do the same. Now make it fast within a filesystem. Or work across filesystems (depending on your first implementation).
Then write an encryption or compression program.
Now write a simple library, make a red/black tree. Store ints first, but then later, store generic objects (design your API carefully!). Make two iterators, one using a callback and one using a next() function. Decide which you like better.
About now you may start to find your development environment lacking, do you use Make? Autotools? Something else? Go learn one and update your old projects to use it. While you're at it, write some tests for them. Did you handle command line options for your version of "cp"? Did you use getopt_long(3)? Might be a good idea. Oh by the way, does it support "-R"? Hmm...and your tree isn't reentrant either, better go learn about pthread locks.
After a while, you should start learning some other libraries (because you want to do real work): curl, libxml, sqlite (hey these are in that book!). Study their code, their build systems, their test harnesses (oh man, especially sqlite's) and incorporate the ideas you like into your own work. Use them to build something cool, maybe a web crawler or whatever.
Do you like music? Learn an audio library and make something. From here, you should direct yourself to what you like building, because you probably have enough tools to do the learning on your own.
The comp.lang.c FAQ was very helpful when I taught myself C using K&R. It answered most of the questions that I had that I couldn't answer with K&R alone. I ended up reading the FAQ straight through because it was so helpful.
What prompted you to write this? I'm not insulted. I am curious why you would leap to that conclusion.
Is it based on how I worded my question? Or the fact I haven't "learned" or used c? Do you think I don't understand or haven't used pointers? Or memory management issues?
It's that you haven't used C. Learning C teaches you something about programming that other languages can't. It teaches you ways of thinking about problems that are useful even when you aren't using C.
This is true of most languages, and probably most true of Lisps and MLs. If you've never learned a Lisp, I also would consider you "not an experienced programmer". Not "an inexperienced programmer", and you may be "an experienced <ruby/python/garfunkel> programmer", but I think you are "not an experienced programmer". You should also use Perl at some point, for example. By the way, it's awesome that you want to learn more, that's a very good sign. (Again, this is all very heavy IMHO ;-)
In C's case, some of the things you learn are pointers, memory management, portability, simplicity, filesystems, build systems, etc. These are usually things that are handled by other languages, or that other languages discourage you from thinking about. Other languages will have made choices for you about how to hide these details, and it's often useful to have made your own choices about such problems at least a few times in the past. Otherwise you may end up blinded by the choices made by others.
"It teaches you ways of thinking about problems that are useful even when you aren't using C."
That's a strange thing to say considering that, as a C programmer, you're much more concerned with implementation issues rather than expressing high level ideas. In general, when I'm given a problem, I care about the organization of my data. I'm not concerned with memory management.
That's a bold and narrow claim to make. How do you know with what I'm concerned? It sounds like you consider the organization of your data a loftier thing to worry about than memory management. They don't sound that different to my ears, though your view of the world sounds fairly more rude. I hope you never need to be concerned with memory management, for your and your users' sakes.
If you want to play with straw men, go elsewhere. All you're doing here is wasting everyone's time, including your own.
I have a strongly negative reaction to straw man arguments.
Maybe I can be more helpful:
Being a programmer is like being a novelist. You need to be able to construct a plot, but you also need a powerful vocabulary. I'm saying C helps develop your computer vocabulary, and you're saying the programming analog of "but it doesn't tell you how to invent interesting characters!" Totally different parts of a useful experience.
No, that does not enlighten. I've written OSs, kernels, embedded systems - in C++. Because it's a better language for those applications. The arguments in that link are spurious - just sound bites to make the author feel better.
In no case does C++ require any more runtime than C. You can use little or no runtime at all in either language, with minimal effort.
Easier to write low-level in C? This is unsupportable. C++ can write anything you can write in C (it's a superset), so anything possible in C is possible in C++, but now you have more leverage.
That's exactly what I am saying: it is about personal choice. If you like C++, use C++; nobody is stopping you. But that doesn't mean all other languages are bad or pathetic.
"Enlightening" was about realizing that the debate can never end, but that one can come to see that both languages are good in their own space. Neither is superior to the other, at least in the case of C and C++.
Not ambitious!!!
If it makes you feel happier, I will say that Linux, GCC, Valgrind... all were NOT ambitious projects. But everybody knows what the FACT is!!
I design ambitious systems for a living. Quarter to half million lines of C++. Not possible to have made in C, in anything like the same time frame.
So yes, 1000 monkeys on 1000 typewriters can make something large in C. But one guy (or a small team), properly trained and experienced, can move mountains with a high-leverage language.
"Not possible to have made in C." -- Is it really true ?? I don't need the answer. You need to answer yourself.
"one guy (or a small team), properly trained and experienced, can move mountains with a high-leverage language." -- But if guy is not properly trained he can create a mess which is hell to clean also.
"1000 monkeys on 1000 typewriters can make something large in C" -- Every time you are using an STL container, that container has already been written by someone else. Every template you use, compiler generates the actual code for it. Every time you include a library , it was written in actual C/C++ code by someone else. Use g++ -E option to compare the "pre processed" code and compare the things.
Yeah - the STL. That's the issue, isn't it? It's a mess to use the STL embedded or for OS work - so I never, ever do.
Libraries are used by app guys - rarely by OS or embedded - unless they are specific to the hardware, etc.
I hear the excuse "you can hurt yourself in C++ if not trained". That's true, but not helpful. You can build a house with hand tools too but nobody does any more - they use power tools and professional carpenters, electricians etc.
There is a saying: "At times a needle can do what a sword can't." The same holds true in the case of programming languages!
There was no excuse. There was a point made: "If one likes C++, use C++; nobody is stopping him/her. But that doesn't mean all other languages are bad or pathetic."
I wonder, if C++ is so good and the STL is such a godly thing, then why did we need to go to Objective-C? Why do iOS applications need that, and why didn't C++ and the STL do the magic? The answer is that every tool/technology has its application. Loading a heavy container (vector/list) doesn't make sense when all you need to implement is a skip list.
Nice to know that you write a million lines of code.
I worked for one founder once. He earned $137 million by selling his products. He had worked on similar products for 15 years in his previous companies, primarily in C++. But when he started on his own, he chose C. He chose C for its own qualities, and I am sure he missed some good qualities of C++, but that is a compromise that comes with an early decision.
No; but surely you're not unkindly construing I wrote too many lines? I assert that in C it would have been many times the size, and probably never worked at all.
I've reduced 10,000 lines of C code to a 12-line template. Surely that is a good thing? This is not typical but a good example of high-leverage tools.
As with any new language, once you're comfortable with a small core of the language, start working on your own problems. Start with little made up problems, and move on to useful tools as soon as possible. You don't have to be a master to write your own tools, and you won't become a master until after you've written your own tools.