Author here. I'm positively surprised: constructive discussion on the internet!
There's been some discussion about using Rust.
Rust is an interesting language, and it definitely has the potential to substitute for C and C++ in some domains. The main reason I'm not so interested in using it for game development is that it's more complicated than C (like C++), and the complications are a bit off from what I'd want (like in C++). A quick googling reveals that Rust mangles names by default and doesn't have reflection, so I'd probably be in for a lot of negative surprises. Add to that being indifferent about absolute safety, and soon most of my code would be inside unsafe blocks so that I don't have to spend time convincing the compiler my pointers are safe. Maybe. I haven't done much programming in Rust. There are some nice convenience features compared to C, but they seem to be rather minor things.
It comes down to choosing between two non-ideal solutions. I value simplicity more than some, so favoring the simpler one feels more natural to me. Sure, when you need the safety then Rust seems like a decent choice.
Only responding to one element of your comment -- but I think that Rust's safety features are over-marketed in my experience. The safety Rust offers is only part of a package that provides high-performance, high-level abstractions over functionality that is normally very bit-twiddly in C. So far, my favorite thing about Rust is not its safety, but how easy it is to write high-performance software using abstractions that are as convenient as Python's (or another high-level language's). The safety is just icing on the cake at that point (for me at least).
EDIT: Re: pointers -- Rust is a lot easier to write if you just ignore pointers. Really. Either pass small structs by value or pass references (either immutable or mutable), and let the compiler handle the actual pointer manipulation.
> I think that Rust's safety features are over-marketed in my experience.
I agree with you in the sense that overfocusing on safety has led to things like Andrei's "bulging muscle" criticism, implying that Rust is especially lacking in high-level and metaprogramming features. Compared to C++ and D, it certainly does have a lower feature count in that area (although I'd argue that it's counterbalanced by the fact that Rust holds the line on strong typing for generics and hygiene for macros, whereas C++ and D don't). However, compared to most other languages Rust has very feature-rich generic programming and metaprogramming support. We're not talking "does not support generics" here; we're talking about the difference between the 95% and the 99% metaprogramming use cases. In the overall landscape of industry languages, even just having associated types makes Rust's generics system one of the most sophisticated out there. The only popular languages I can think of with more powerful generics are C++, D, Scala, and Haskell, and the first two sacrifice strong typing.
The main reason for the focus on safety is that it, combined with the lack of GC, is what actually makes Rust unique. Very few industry languages have any features unique to them, but the borrow checker is one such feature. C++ and Swift may get something like it in the future, but Rust has it now, and the entire language and ecosystem is designed around it (and the borrow checker is especially difficult to bolt on to an existing language, because it relies on strong aliasing guarantees). So it's natural that most people have focused on the zero-overhead safety when describing Rust--it's the most salient answer to the question "what can I do with Rust that I can't do with language X?"
Yes. For me, as someone who came to Rust from a non-C/C++ background, the appeal is that I get powerful features like an ML-ish type system, iterators, functional programming toys, and more, while still getting executables with C++- or even C-like size and performance.
Rust to me feels like someone sat down to make a systems language that was actually aware of the last 40 years of programming language development. I don't have to sacrifice expressiveness for performance anymore.
The borrowing and reference safety mechanics are just an extra layer of worry-removal icing on what's already a pretty appealing cake.
If it wasn't for the uptake of UNIX, we would probably never have had to discuss memory corruption in 2016, other than when writing stuff like device drivers.
There is now a whole generation that thinks C was the very first systems programming language, that the compilers were as good on day 1 as they are today, and it has become some kind of sacred cow.
I don't think Rust's safety guarantees are over-marketed because they aren't important, but because many of the benefits of the language/stdlib/tooling are available even to people who don't frequently have to deal with memory corruption. It's not that the safety isn't valuable (I just about pulled all my hair out implementing custom data structures in C++ last time I did it), but that most developers think it's not as useful as other marketable elements of the language.
FYI, if you care as much about compilation speed as your post suggests, today's Rust is right out: rustc is considerably slower than C++ compilers for typical workloads in large part because it doesn't have proper incremental rebuilds. That could change in a matter of months, which I'm looking forward to, but it still probably won't be at C level.* And without optimizations, rustc produces considerably worse object code than even C++.
* I could be pleasantly surprised, though. In theory, for incremental builds, based on the general design planned [1], rustc should be able to do better even than C in a lot of cases because only changed functions need to be recompiled rather than entire files; but I'm a pessimist and expect there will be something to make it slow in practice. I could be wrong though.
I was positively surprised by the constructiveness of the article itself. You'd usually get a Linus-style rant about how C++ sucks so bad and C is the epitome of simplicity and design. I wholeheartedly agree with you that both languages are lacking, and I'm also waiting until Rust or another language grows mature enough to replace both of them in most cases.
But I think most of the main points you describe as impossible in C++ are actually completely possible. It means moving away from the 90's paradigm of using C++ to implement deep class hierarchies with design patterns, but after all, C++ is a multi-paradigm language. If you can do data-oriented design in C, you can most certainly do it in C++, and the abstractions C++ provides actually make it easier.
In essence, I would separate game objects into a logic instance and state objects (pure structs), and use smart pointers with a generation counter to point to the logic instance, which in turn would have a smart pointer to the state struct. The smart pointer would overload the dereference operator and transparently update the logic instance to get the new vtable if needed.
This decoupling of state and logic enables many nice things, such as serializing the entire game state in a very clean way, and pure state structs would be no harder to parse than C structs (they would essentially be C structs), so you could still have your memory editor.
The main difference between C and C++ here would be the cost of abstractions, both cognitive and performance-wise. I have to admit I've never run into standard abstractions significantly slowing down optimized debug builds, except for standard library containers. I'm not a game programmer though, so YMMV. I get the cognitive cost argument, but for me the cognitive cost of C (namely having boilerplate noise scattered all over, hiding the interesting code, and having to be super-extra-careful with memory management) is higher than the cognitive cost of internalizing all the layers of abstraction in C++.
If you're programming a network game, I still think you'd do your users a better service if you don't dismiss safety offhand. There's a whole class of memory-safety bugs which would never surface during normal play, but could still be exploited with specially crafted packets. Of course, C++ wouldn't give you perfect protection either.
I agree with your depiction of C++. In fact, one of the benefits of C++ is that it is convenient to use stack allocation rather than heap allocation. This can really simplify memory management. The catch is that you have to choose a different idiom for your idiomatic C++ ;-). The trick is to always know who owns the memory and to never allocate memory in libraries. You use dependency injection, always pass by reference, and always clean up in your destructors. If you are forced to allocate something in a library, you build a wrapper to deallocate it (or sometimes copy the data when you receive it so that you can own it).
It takes some experience to build applications this way, but it is well worth building that experience. C++ is still my language of choice for anything that requires fine control of memory and attention to performance. Rust looks like a very possible successor, but I agree that the lack of incremental compiling makes it not useful for large projects at the moment.
> I agree that the lack of incremental compiling makes it not useful for large projects at the moment.
Whoa there. Let's not exaggerate the scope of the problem. Large projects in Rust usually consist of many crates (in Servo, 150+). Rust absolutely has incremental compilation on the level of the crate, so the compilation times are similar to what you see in C++ with per-directory unity builds. This is the same story as Go, for example, and people don't say Go isn't suitable for large projects; incremental compilation isn't even on Go's roadmap.
> one of the benefits of C++ is that it is convenient to use stack allocation rather than heap allocation.
Not quite. With its destructors (and the RAII that comes with them), C++ makes it easy to follow a scoped discipline. Allocations are easily scoped, but they don't necessarily happen on the stack.
Strings and vectors, for instance, are typically allocated on the heap, assuming you're using the default allocator (which most people do anyway). While this is convenient and makes things simpler, it carries quite a bit of runtime overhead, and causes latency problems similar to those of a genuine garbage collector (malloc() and free() aren't exactly constant time).
> never allocate memory in libraries
Now that can reduce the number of allocations in a program. Note that this also rules out most of the STL: its containers call their (possibly user defined) allocator themselves.
It is quite possible that I'm out of date (haven't used C++ in anger for the better part of a decade), but local variable allocations have always historically been on the stack. Just don't use new. Arrays will be allocated on the stack as long as they are fixed length.
> local variable allocations have always historically been on the stack
And they still are, thank goodness. My point was that as soon as you call a constructor, all bets are off: for instance, merely declaring an std::string allocates on the heap, local variable or no. But it seems you already knew that.
I think it is important to distinguish scoped discipline from stack allocation. The former is a way to program. The latter is an implementation detail (modulo performance). Your wording was a tiny bit sloppy, so I jumped on it.
Good that I succeeded at avoiding ranting. I find it really hard to keep from throwing absolutes.
Almost everything is possible with C++, that's true. Some things require a lot of engineering though. Like full program reflection, fast debug builds, and fast (below 5s) builds.
Funny that you mention the decoupling of state and logic, because I see that as a non-issue in C. State is a struct, logic is a function. It would take a really convincing argument to make me wrap each of the hundreds of game object components I'll have into two objects.
C has some cognitive costs of its own that C++ doesn't have, that's for sure. I think we just have differing personal preferences on which is worse :P (There are real-world situations where I'd choose C++ though)
The network point is valid. Implementing it was mostly a nice learning opportunity. I haven't yet decided if I want to keep it or not. It currently lacks safety and proper compression, so if I get serious about it I'll probably make all network data go through validation functions, which helps with both compression and handling hostile data.
You've said most of your build time was linking, so wouldn't you get the same build times with C? Templates would slow your build time a lot though, so they come with a price. C++ would never beat C on build times, but C never came close to Pascal, back in the day, and I'm pretty sure C# still beats the guts out of C with hands tied. :)
> Funny that you mention the decoupling of state and logic, because I see that as a non-issue in C.
I beg to differ. Decoupling logic and raw data is not an issue in C, but state is not the same as raw data. You may have the same state data duplicated in memory (e.g. an index), and there's also the issue of representation of state which is not portable (e.g. pointers, particular data structure). You also have pointers to functions (logic) in your data if I understood you right.
With C++ (or any sufficiently high level language) you can easily abstract away the difference between state and raw data. If you've got function pointers or pointers to instances of logic objects (which is a must if you don't want to litter your code with long switch clauses), you could save them as type IDs or something akin to that.
You might be programming something very different, but so far most good C code I've seen was thoroughly object-oriented. Not in the sense that it religiously followed the three pillars of OOP, but in the sense that it coupled structs with several functions meant to be used solely with those structs, and sometimes employed polymorphism by embedding function pointers in the structs.
All of that is much easier to do with C++ classes, and if this is all you wanted (and you don't think you need a proper separation of raw data from bona fide state), you can have code that refreshes the vtable pointer in memory, which would surely be more straightforward than manually updating function pointers all over the place.
> It currently lacks safety and proper compression, so if I get serious about it I'll probably make all network data go through validation functions, which helps with both compression and handling hostile data.
Validation is nice, but you can't foresee every weird way your data could be shaped. It's very easy to miss one small corner. That's why you see several security engineers going around these threads dissing C. :)
> You've said most of your build time was linking, so wouldn't you get the same build times with C?
C++ generally incurs a price on link times too. For example, C++'s stronger type system places more burden on name mangling: there is actually a difference between an int and a long, even if they are the same size on the system. Then if you have code that doesn't make this distinction, overloaded functions need to be generated, which creates more code bloat. Then if you throw templates into the mix, C++ generates a ton of symbols for every permutation. This is why C++ object files are usually much larger than C object files, and why the linking process is so much slower.
This is a nice talk from Jonathan Blow (Braid's author) about the necessity of a new language for game programming as an alternative to C++, and why new languages like Go or Rust are not up to the task:
He barely mentions Rust, and only to dismiss it as a "big idea language" because it's memory-safe. Not a very interesting critique, IMO -- he doesn't talk at all about any of the other features of Rust that improve on C and C++.
I doubt you'd have the sort of trouble with pointers and unsafe blocks that you suggest. Most typical pointer patterns don't even require lifetime annotations, let alone bypassing the borrow checker.
I also think the "convenience features" are a pretty big deal. Generics with traits are a pretty big value add for their complexity, and to me affine types take away a lot of the complexity of C.
I can sense the pitchforks coming out already! A quick disclaimer first of all: I LOVE C. Love love love it. I'm a reverser so it's my native tongue. I get as much joy from the actual creative process of coding and playing with pointers as I do from having a reliable working finished project.
No offense, but I think you suffer from the same problem. You're in love with coding, so being forced to micro-manage things the 'C' way isn't a problem for you.
But at some point in time you have to look back and justify the hours spent. You -will- be more productive by having the majority of your code in C++11. Things -will- be easier to maintain for yourself and others.
I have written absolutely brilliant C code that I'm very proud of, but if it's been a while since I looked at the project I have to sit down for an hour and re-familiarize myself with how things work....and I'm the one who wrote it.
Based on what you've written I can tell that you don't have the experience to justify writing the bulk of your code in C. Guys who inline ASM all day long can have a hard time doing that. I know it's harsh, but it's meant as helpful criticism.
One other thing: You don't have to pick just C or just C++. If you have engines in your project that are clearly better in C and you enjoy doing it, more power to you....I encourage you to do that. However, it's very easy to get hung up on C and lost in your own C world of imagined optimizations.
My advice: Start off with solid C++, move parts to C later on when testing justifies it and your amount of free time justifies it.
I don't think you're addressing the concerns which made him choose C. In his case: debugging is complex, compilation is slow, name mangling is unreliable, and global state abounds.
He makes no mention of inline assembler, nor does he encourage premature optimization.
He talks instead about a concrete problem he had: the typical performance of operations was not good enough, thus no single optimization would help much. He also mentions that the abstractions encouraged by C++ do not help you build data-oriented applications (e.g. games); using them properly actually hurts performance.
meh - imo, even if C++ is a ridiculously bloated beast, it has some things that are indispensable for game programming that C doesn't. vector math without operator overloading is gross. being able to have generic containers with templates is really useful, even if templates are gross. encapsulating functionality in classes is always useful, and you can get a lot of use out of inheritance without overusing it.
these things are really codebase readability and maintainability concerns, and to me they're orthogonal to "data-oriented programming", things like optimizing for cache locality, SoA instead of AoS, not calling virtual methods in warm or hot paths, throwing away malloc and using purpose-built custom allocators, SIMD/AVXify all the things, etc. etc.
it's a lot harder to handle the insane, byzantine complexity of a game engine in something as pared-down as C.
with that said, i don't like C++ and i definitely don't think it's a beautiful language by any means.
jonathan blow's new language, jai, is very interesting. purpose-built for game programming. check it out if you don't know about it yet. it's looking really promising for all types of high-performance, game-like projects.
> vector math without operator overloading is gross.
Many C compilers can use normal infix operators with SIMD vectors and it will have better performance than C++ operator overloading (especially in debug builds). All you need is a typedef.
    typedef float vec4f __attribute__ ((vector_size (16)));
    vec4f a = { 1, 2, 3, 4 }, b = { 5, 6, 7, 8 };
    vec4f c = a * (a + b);
The typedef is slightly different for Clang, but that's just one line. This code should work with GCC, Clang and the Intel C compiler. MSVC doesn't do this, but I don't write code for MSVC any more because I can compile compatible object files with Clang.
In my experiments, I've noticed that you get the best performance by passing vectors and matrices by value, not by pointer or reference. It also makes the API nice, because you can return matrix values, e.g. `mat4x4f mvp = matrix_product(projection, matrix_product(model, view))`. In some nasty cases you might need to add force_inline attribute, but most of the time the compiler will inline the functions anyway.
In C++ with operator overloading it's easy to do stupid things like in-place addition (e.g. operator+= for a vec4f) or start transposing matrices in place. This will make the compiler emit memory load/store instructions when you'd want to have these values in registers.
> being able to have generic containers with templates
Generic containers are somewhat of an issue with C, but most of the std:: containers are not suitable for some game development tasks (this is why projects like EASTL exist). I prefer using intrusive containers in C, similar to how the Linux kernel deals with linked lists and red-black trees. Even the C++ standard library (the GNU one) is internally implemented this way; the "generic container" is just a thin type-safety shim on top to avoid template bloat.
> In C++ with operator overloading it's easy to do stupid things like in-place addition (e.g. operator+= for a vec4f) or start transposing matrices in place. This will make the compiler emit memory load/store instructions when you'd want to have these values in registers.
That sounds like a failure of inlining. If you're at the point where a single stack spill makes a difference, you won't want to pay the cost of the calling convention spills either. And if you are inlining, then SROA and mem2reg will easily remove those load/store instructions. Modern compiler optimizations make what you describe not a problem anymore.
> Modern compiler optimizations make what you describe not a problem anymore.
Yes, it should get inlined, but I've seen this fail in a recent-ish GCC (4.6 to 4.8 or so). And there's still the issue of debug builds being slower even if the compiler works perfectly in optimized builds.
Operator overloading will only work with SIMD if you write your vector class using intrinsics or SIMD extensions anyway. If you're doing scalar loads and stores, you can't rely on getting SIMD instructions in the output.
> Yes, it should get inlined, but I've seen this fail in a recent-ish GCC (4.6 to 4.8 or so)
Seems like a pretty bad GCC bug then, one that should be fixed upstream.
I really dislike it when code avoids functions because of fear that they won't be inlined (or to try to work around compiler bugs to that effect), because doing this dramatically reduces code maintainability and safety in exchange for very little benefit, given the inline hint keyword and __attribute__((always_inline)).
GCC's inlining has been a bit brittle, especially when dealing with vector arguments, and also depending on the ABI (4x double vectors without AVX, etc). It's much better in GCC 5.x now.
I generally use always_inline for vector arithmetic functions, just to be sure. You never want to have a function call to do just a few SIMD instructions.
Incidentally, a tip: C11 has basic generic/overloading support based on macros. So even if you can't get "a + b * c + d", you can at least get "add(a, mul(b, c), d)". The implementation is ugly, but if you only need it for one thing, it doesn't really matter.
1.) the comment about ASM was simply to illustrate that I doubt many highly competent coders would ever advocate doing a large project completely in C. bad example maybe, but that was my only point.
2.) complaints about debugging are invalid, sorry. we have amazing tools available these days and there's no excuse to suck at debugging.
3.) compilation is slow, he's right about that. #pragma hdrstop can help a bit there
4.) name mangling is a bitch, you can get around it by defining your exports, etc.
5.) everything else is just excuses for bad C++ code. it doesn't matter what the latest C++ style encourages you to do, what matters is writing code that works for your application.
I actually think it's awesome he's going the C route, it's just that he could probably do his project in C++ two times over in the same amount of time and I think most of his reasoning for choosing C is crap.
Didn't mean to come off like a dick so much, I wish him well, and I gotta admit alloca() is awesome
You talk about performance a lot, but do you actually have it as an important requirement, or is it just a fun problem to tackle as a programmer? In modern game development, it's usually the latter — most hobby game projects don't have art assets detailed enough to be slow on the platforms where your end-users will actually play your game.
I make games in Unity/C#, and yes, of course it's slower than a custom C solution. But instead of spending time on writing my own containers and memory allocation, I spend time writing game logic. Instead of optimizing the game to go from 100 to 150 FPS on my computer, I'd rather spend that productive time making 5 variants of the same game mechanic to find out which is more fun. Even if the game runs at just 30 FPS.
Of course, I sometimes try to research low-level stuff, write my own renderer, or do a small experimental project in pure C, but then my goal is the research, not writing a game. I can hardly imagine a project where the goal is to develop a game (rather than to do research) in which using C, and having to spend so much time on these matters, would be a good trade-off compared to working on actual game mechanics in a higher-level language.
Unity itself is written in C++. That the game logic is written in C# is hardly relevant, as you explain. In fact, this guy might still embed C# at some point if he wishes for his logic to be less painful to write.
The author states at the beginning that his goal is to write an engine, and only after that to write a game. You're right on all other points though; most people working on game engines simply never get to the making-a-game part. There's simply too much to do.
I spent a couple of years working together with a friend, every now and then, on a game + game engine in C#. We did get past the game engine phase because it was kept simple (ECS + Box2D and a simple MonoGame 3D renderer), but we eventually gave up because, even though the game logic was progressing swiftly, assets were just a pain in the ass.
Most low-level engineers (as we are/were) underestimate the work it takes to get a proper asset pipeline set up. You're never going to build a 3D game if you can't load cheap third-party assets with their animations and materials. And then there's the level designer, which is basically just another game inside your game.
We went with UE4 as a change of pace, and man, is it liberating not to have to worry about the limitations of your homebrew engine anymore. In UE4 literally anything is possible (you have the full source, after all), and it's always less work.
I guess my question to the author should've been: why are you trying to write a game engine? It's a completely different task from writing a good game, and mostly conflicts with it.
Two reasons. First, creative interests which make traditional game engines a hard fit. I could make compromises to the design, but as I also have a technical interest in making game engines, it's a double win. Second, the need for high performance, which also meets my creative interests. I also value being able to implement extraordinary features in the engine on a whim, which is hard with large general-purpose engines.
I agree that there is a class of games that can be easily implemented with a prebuilt engine, but this is not one of them.
> I agree that there is a class of games that can be easily implemented with a prebuilt engine, but this is not one of them.
This "class of games" includes more or less everything that a single developer without a very costly art department can produce. If your game can not be implemented with a prebuilt engine, it's really something extraordinary.
I think it's pretty clear from the article that using an off-the-shelf engine is not an option for the author, because it doesn't satisfy his interest in building game engines ;-)
That said, I agree with your previous comment about performance. Rarely, if ever, should using one programming language over another limit the performance of a game so much that it severely affects what you can and cannot do gameplay- or feature-wise. Unless you use some highly dynamic scripting language that fundamentally doesn't fit the problem domain, of course (I wouldn't write a 3D engine in Python or Ruby, for example).
I can somewhat understand some of the other arguments for choosing C over C++ (primarily build times), but for the most part the problems the author has with C++ appear to be philosophical. You can have almost everything you want in C++, if you're prepared to use it differently depending on your requirements. In terms of performance, it really is true that optimization only pays off for small sections of your code, so it doesn't make sense to switch languages for that. You can most definitely make an extremely efficient data-oriented entity-component system in C++ without having to jump through any hoops, for example.
I don't personally think it matters all that much even on AAA games. Unreal has GC. Take a look at the games made in Unity. Plenty of AAA or close to AAA games. Cities Skylines, Firewatch, Ori and the Blind Forest, Pollen, Endless Legend.
I've written several game engines in the past. I hope to never do it again.
>Take a look at the games made in Unity. Plenty of AAA or close to AAA games. Cities Skylines, Firewatch, Ori and the Blind Forest, Pollen, Endless Legend.
I think we may have differing standards. I've only played one of those, Ori, and it had massive problems with long frames. Like, often, and sometimes even half a second long pauses during normal gameplay. I don't know if Unity is to blame for that, but that game is not a good argument in favor of it at least.
> I have to choose between writing duplicated code, writing a code generator, or tedious macro stuff for generic code.
Code-generation all the way. Use an expressive language like python/lua/tcl/lisp/scheme to work on a higher layer than C.
Two-language programming (one GC scripting, the other C) beats the heck out of C++, in terms of best of both worlds: expressivity in higher layer, performance in lower layer.
C++ has a dirty secret no one likes to talk about. Stroustrup himself was a two-language programmer. C++ was C with classes where the classes were built with unhygienic C macros. When he decided to show his work to "average joe programmers" of the world, he turned it into one language (it turned out he was not a very good language designer so the world has to live with it).
If you're working with C++ you are Stroustrup's average-joe customer.
If you're doing two-language programming, you're Stroustrup himself (even better, because you're using a high-level language far better than the C macro system)!
Other than that: you want superpowers? Give your text editor your C parser (something similar to this [1]). Structured editing can yield amazing productivity gains in C. This is something I'm still looking into (using some vi/vi-clone or emacs/emacs-clone, and pycparser), but I'm very excited about the possibilities.
Where's your proof that a mix of, e.g., Python and C is a particularly productive way of working? This is not the first time you've made this unsubstantiated claim.
The opposite of what you say is easy to argue for: two languages, twice the headaches.
* first of all, you need to know TWO different languages. C is completely different from a language like Ruby or Lisp.
* even if you manage that, you still have two build systems, two things to package and deploy, and you need two libraries of everything (e.g. unit testing)
* you need to constantly pass information between the two worlds, which can be both performance costly and challenging from a design perspective.
* normally it's not as easy as "rewriting the slow parts in C". What if your problem is overall memory usage? What if there is no one function to rewrite?
* if you make a mistake in C (super easy), your whole app crashes.
I don't think this is the magical solution you're presenting it to be at all.
Here are some examples (based on my limited experience) that hardly anyone could challenge:
- Unix philosophy and the shell utilities (bash + C)
- Emacs (lisp + C)
- Python scientific stack (Python + C/FORTRAN) (Heck even Google had to offer tensorflow in two interfaces, Python and C++. If they were going to offer a Python interface anyway, they could make their life 100x easier by implementing the backend in C instead of C++).
- git (bash + C)
As for "two languages, twice the headaches": not if your one language is a multi-paradigm monster called C++. I'm not sure if this was mainstream advice when C++ was created, but these days it's common knowledge that you should learn at least 3 to 5 different languages in order to improve as a programmer. If you can learn 3 to 5 languages, you can definitely work in 2 languages for your projects.
And by 'average joe' I don't mean any disrespect to C++ programmers (I'm just commenting on the thinking behind the creation of C++). Quite the contrary, I have no doubt that it takes a lot more skill and hard work to program in C++ idiomatically than in C. Unfortunately, IMO, a lot of that skill and hard work is spent managing the "accidental complexity" of the language, not the "inherent complexity" of the project.
I generally support this approach. I have liked working with C and chibi-scheme. Though I think depending on the size of your game logic, it may make little sense to even bother with a scripting language.
There are real costs to building the abstractions needed in order to make a useful scripting environment.
> 5. realize that I shouldn't be using some parts of C++ (exceptions, stdlib)
> 6. start to ponder if I really need even the good parts of C++
This reads like wisdom and maturity to me; unfortunate, but not surprising, that people are quick to judge.
Last game studio I worked at, we wouldn't have given up C++, but there were frequent conversations about its pitfalls and complexity, and quite a few rules and conventions recommending against, or outright prohibiting, some C++ practices (exceptions, for example).
Before that, the last film studio I worked at, a bad experience with C++ led them to chuck it (before I got there) and go object oriented C. I learned their style of C and quite liked it. I missed operators and templates a bit, and it felt a little verbose, but I came to really appreciate the simplicity and explicitness.
Sounds like it wasn't an easy choice, but on a solo project that large you have to prioritize what makes you feel the most productive in the long term, and there are always tradeoffs.
As a long-time C++ coder, I need to add that project-specific prohibition of some C++ practices is a pretty normal, even recommended thing to do. Introducing such prohibitions is not a fault of C++.
Also, I've observed that a "bad experience with C++" is often related to someone using some more advanced C++ paradigm without completely understanding it.
> project-specific prohibition of some C++ practices is a pretty normal, even recommended thing to do.
So there's a need to subset C++. I wonder. If we took the union of the most common subsets, would we have all of C++, or only parts of it? Which parts of C++ are unsuitable for any project?
If there are any, it is totally C++'s fault that we have to subset it. Historical reasons, yada yada; I don't care: a language you have to subset is still worse than the subset itself.
JavaScript held in high esteem? You can't be serious? That language is way too flawed: equality operator that's not transitive, wicked scoping rules, semicolon that you can omit but really shouldn't… And you're telling me this poor Self copycat is held in high esteem? How? Because it took over the web? That's not enough.
As for C++… That language is just too big for its own good. It tries to be too many things to too many people, and as a result is almost never the best choice. (Discounting code already written and available expertise of course. I'm judging the language, not the ecosystem.)
The idea of C++ is a good one, under a couple conditions: first, it must be tuned to a domain or a way of programming. Second, it must not try to be goddamn source compatible with C. If we're writing a language, we might as well correct C's mistakes along the way.
I wouldn't say "wrong", this is a subjective issue, and some people like them. It's just that exceptions can be dangerous, in a sense, because a very innocuous looking line of code or function call can end up executing something you don't expect, or your function can bail out before you expect. It's similar to how inheritance and virtual functions hide (encapsulate) what's going on, but also obscure execution too. Usually the encapsulation is good, but when you need to know exactly what's going on, they can throw wrenches at you. It can be really hard to figure out all the possible errors that could be thrown, or code paths that could be taken in any given case.
I don't know the specific reasons std lib was mentioned, that's a big topic. But generally in the context of a game engine, you need all custom memory management and I/O, so there isn't much place for std lib, depending on which std lib pieces and which platforms and which C++ you're talking about.
For exceptions:
- A bit unpredictable (hard to optimize)
- Introduce a lot of exit points which are hard to find
- They deviate from "pay only for what you use", since they generate some extra code
- They can introduce some nasty performance penalties (I experienced this myself a few years back; compilers might be smarter these days, maybe)
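The "exit points which are hard to find" bullet is worth a concrete illustration. In this sketch (the function and its inputs are invented), three ordinary-looking lines can each abandon the function:

```cpp
#include <stdexcept>
#include <string>
#include <vector>

// Hidden exit points: at() can throw std::out_of_range, the string
// copy can throw std::bad_alloc, and stoi can throw
// std::invalid_argument. None of them *looks* like control flow.
int first_arg_as_int(const std::vector<std::string>& args) {
    std::string copy = args.at(0);  // may throw
    return std::stoi(copy);        // may throw
}
```

Any caller that acquires resources around such a call has to be exception safe, which is exactly the mental overhead being discussed.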
About the stdlib: there's a "myth" that oftentimes the stdlib/STL is slow and doesn't do what you expect (unless you've thoroughly read the documentation, which everyone should do anyway). I can't say I was personally affected by this, since I use C++ for nothing too serious. I say "myth" because the counter-argument is that the slowness is an outdated notion.
Most C++ compilers these days should have zero cost exception handling, meaning that the non-exceptional path should be free. This works by making the compiler add exception handler data to the executable file, ie. telling where the catch() blocks are. When an exception is thrown, the stack trace is analyzed and the appropriate catch handler is found by searching for the return addresses in the exception handler data.
This can make C++ with exceptions faster than C with error checks, because there are no branches to check for every error condition. On the C side, marking error conditions as unlikely with __builtin_expect can mitigate this cost.
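A minimal sketch of the __builtin_expect technique (this builtin is GCC/Clang-specific, not standard C++; C++20 spells the same hint [[unlikely]]):

```cpp
// Hint that the error branch is cold, so the compiler keeps the hot
// path as straight-line code, narrowing the gap with zero-cost
// exception handling.
#define unlikely(x) __builtin_expect(!!(x), 0)

int read_value(const int* p, int* out) {
    if (unlikely(p == nullptr))
        return -1;        // cold error path
    *out = *p;            // hot path, expected to fall through
    return 0;
}
```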
The cost is that you have to ensure that all your code is exception safe unless you are extremely careful to ensure that exceptions are only thrown in places you've expected and coded for.
Exception safe code is both difficult to write correctly and often has a significant run time cost.
Agreed on both accounts but the grandparent comment was talking about runtime performance penalty, which is gone (for the non-exceptional path) in modern implementations.
But yes: exception safety, even basic safety (don't leak resources or crash) is difficult, let alone strong exception safety (no side effects if exception occurs).
A good article, but what disturbs me is that the author, while obviously aware of relatively good C++ and programming practices, has somehow arrived at a place where he discounts essentially all of C++'s core competencies... like claiming RAII is "far from optimal", that exception safety is "a constant mental overhead", and that copy and move semantics involve writing "a lot of code". These are fairly outrageous claims, and it all smells of total burnout, and losing sight of the forest for the trees, to me.
I feel that if the author refocused on his C++ basics instead of bemoaning lots of peripheral crap (like OOP, and the unsuitability of the STL for game dev) he'd rediscover some joy in C++.
I think one of the main reasons he did what he did is his motivation, or lack of it, while working with C++. Whatever rational arguments point in favor of C++ will be essentially meaningless if he is not happy writing it. That's why discipline is so important in any work. Motivation can get you only so far; it is highly volatile and unreliable. I think it's safe to assume the author will soon get tired of C, move to a language like Python, and write an article about how optimizations aren't that important compared to productivity and how the critical parts can be written in C, etc. Great article for his perspective and some pros/cons of C/C++, but take it with a grain of salt.
Absolutely you should take the text with a grain of salt. The reasons I chose C are very personal.
I can entertain the idea that my desire for simplicity is just a fad. I suspect that I wouldn't move to something like Python (I use it for other purposes though), because I don't like optimizing. I want a mindset which helps me produce code with reasonable productivity, and whose performance I don't need to worry about much. But yeah, it's possible that I'll change my mind and write an article about how I was so wrong before :P
I see an unwillingness to adopt the modern (and admittedly somewhat extreme) ways of using contemporary C++ well, and only the old, simple but painful ways of using "C with classes", with their serious issues.
For example, the paragraph about loading and saving game state postulates "using the ideas of polymorphism and encapsulation", automatically throwing a lot of pointers and vtables in the way of reading and writing a binary blob like in the C engine.
A serious C++ engine with a "data oriented" design would have the same arrays of primitive types and dumb structs as its C counterpart, merely dressed as std::vector or std::array.
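A minimal sketch of that layout, with invented names: the same flat SoA arrays a C engine would use, merely dressed as std::vector, with no vtables or per-object pointers in sight:

```cpp
#include <cstddef>
#include <vector>

// Data-oriented particle store: structure-of-arrays of primitives,
// identical memory layout to the C version, with vector managing
// capacity.
struct Particles {
    std::vector<float> pos_x, pos_y;
    std::vector<float> vel_x, vel_y;

    void spawn(float x, float y, float vx, float vy) {
        pos_x.push_back(x);  pos_y.push_back(y);
        vel_x.push_back(vx); vel_y.push_back(vy);
    }

    // Tight loop over contiguous memory.
    void integrate(float dt) {
        for (std::size_t i = 0; i < pos_x.size(); ++i) {
            pos_x[i] += vel_x[i] * dt;
            pos_y[i] += vel_y[i] * dt;
        }
    }
};
```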
> A serious C++ engine with a "data oriented" design would have the same arrays of primitive types and dumb structs as its C counterpart, merely dressed as std::vector or std::array
Yeah, I agree. Including "full C++" in the comparison was just to avoid arguing about which is the "right" subset of C++ for engines, which is somewhat beside the point. The main problems I have with modern, data-oriented C++ are mostly compile times, slow debug builds and non-trivial reflection.
> like claiming RAII is "far from optimal", that exception safety is "a constant mental overhead", and that copy and move semantics involve writing "a lot of code". These are fairly outrageous claims
I agree with the OP on all three counts. I don't think RAII is very nice (python-esque with-statement would be nicer IMO), exception safety is really a mental overhead unless you restrict exceptions to a minimum and the copy-assign-move semantics do add a bit of work to every class you introduce.
These are not outrageous claims, they're rather valid opinions. Feel free to disagree but I'm siding with OP on this one.
Overall I dislike C++'s value-based semantics, not because they're inherently bad, but because they're just so different from other (reference-based) languages that only a small minority of programmers knows how to work with them. Attaching complex semantics to types (overloading, templates) and virtual functions/inheritance (when overused) makes reading code much harder, and often you need to resort to stepping through in the debugger to find out where a function call actually leads.
Well-written C++ can be really nice at best, but unfortunately most C++ code bases out there seem to be either a bastard mix of C, Java and C++ styles, or an over-the-top boost-ey template mess. Neither of these extremes hits the sweet spot.
> I don't think RAII is very nice (python-esque with-statement would be nicer IMO)
You can implement Python's 'with' statement in C++ with almost the same syntax and exactly the same terseness. This shouldn't be surprising, since C++'s value-semantic constructor-destructor mechanism is simply more general and fundamental.
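One way to sketch that claim (names invented; assuming C++17 for guaranteed copy elision of the returned guard):

```cpp
#include <utility>

// A minimal scope guard: the destructor plays the role of Python's
// __exit__ and runs on every exit path, including exceptions.
template <typename F>
struct ScopeExit {
    F f;
    ~ScopeExit() { f(); }
};

template <typename F>
ScopeExit<F> on_scope_exit(F f) { return ScopeExit<F>{std::move(f)}; }

// Usage, in the spirit of `with open(path) as f:`:
//   FILE* f = std::fopen(path, "rb");
//   auto guard = on_scope_exit([&] { std::fclose(f); });
//   ... // fclose runs no matter how this scope is left
```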
> copy-assign-move semantics do add a bit of work to every class you introduce.
Only if you fill your objects with multiple dumb C pointers to other objects ad nauseam. If you embrace value semantics and smart pointers you almost never have to write any of the "rule of 5", and when you do it's straightforward. Simple rule: a class should never contain handles or pointers to more than a single resource. Composition handles the rest.
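As a sketch of that rule (Texture and Sprite are invented examples): when every member owns exactly one resource, the "rule of 5" collapses to the rule of zero via composition:

```cpp
#include <memory>
#include <string>
#include <vector>

struct Texture { int id; };

// No hand-written destructor, copy, or move needed: each member
// manages its own resource. Sprite is automatically move-only,
// because unique_ptr is.
struct Sprite {
    std::string name;                  // owns its characters
    std::vector<float> vertices;       // owns its buffer
    std::unique_ptr<Texture> texture;  // owns a single Texture
};
```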
When I mentioned Python's with statement, that's really all I'd want. Something neater than C's "goto error" error handling, but not much more than that. I think there's constructs like this in some languages.
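For readers who haven't seen it, the "goto error" pattern being referred to looks roughly like this (function and names invented):

```cpp
#include <cstdio>
#include <cstdlib>

// The classic C cleanup ladder: one label per acquired resource,
// released in reverse order on every failure path.
int load_blob(const char* path) {
    int rc = -1;
    char* buf = nullptr;
    FILE* f = std::fopen(path, "rb");
    if (!f) goto done;

    buf = static_cast<char*>(std::malloc(1024));
    if (!buf) goto close_file;

    if (std::fread(buf, 1, 1024, f) == 0) goto free_buf;  // empty file: fail
    rc = 0;                                               // success

free_buf:
    std::free(buf);
close_file:
    std::fclose(f);
done:
    return rc;
}
```

It works, but every new resource adds a label and reorders the ladder, which is the tedium a `with`-like construct would remove.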
> Simple rule: a class should never contain handles or pointers to more than a single resource. Composition handles the rest.
The issue with this is that in the real world, you have to deal with 3rd party libraries and frameworks or even the OS syscall interface which don't follow this guideline. So you'll end up writing wrappers for all your "foreign" objects from any other libraries you use. This is a lot of work and creates impedance mismatch problems when there's no simple clearly established concept of ownership (e.g. the wrapped objects are already reference counted) or the assumptions of the library aren't friendly to C++ style semantics.
If you live in a "modern C++ only" bubble, this isn't an issue.
As for external libraries...sometimes it requires a bit of ingenuity to find that perfect 'wrap everything!' vs 'write C' balance... but it's usually not so bad if you approach the problem judiciously. unique_ptr and shared_ptr's capabilities are often underestimated in this regard for instance. Both can be used with external reference counting, the former being the most efficient.
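A sketch of the unique_ptr approach with external reference counting; lib_obj and its acquire/release functions are a made-up stand-in for a reference-counted C API handle:

```cpp
#include <memory>

// Hypothetical C library with its own reference counting.
struct lib_obj { int refs; };
inline lib_obj* lib_acquire(lib_obj* o) { ++o->refs; return o; }
inline void     lib_release(lib_obj* o) { --o->refs; }

// A stateless deleter keeps unique_ptr the size of a raw pointer,
// while guaranteeing lib_release on every exit path.
struct LibRelease {
    void operator()(lib_obj* o) const { if (o) lib_release(o); }
};
using LibHandle = std::unique_ptr<lib_obj, LibRelease>;

inline LibHandle lib_wrap(lib_obj* o) { return LibHandle(lib_acquire(o)); }
```

This is the "most efficient" case mentioned above: no control block, no atomic counter of your own, just the library's counter driven by RAII.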
I'm happy when I'm writing C. It's only when I look back on a week of coding and realise how little functionality I implemented that I realise it's a bad idea.
Interesting, since I feel just the opposite. In C++, most of the time, I can write the precise intent of what the code should do, and I can also choose the level of abstractness, without compromising on performance!
I think the main trick with C++ is to learn the best practices first, and only use the ones you have thoroughly understood.
I agree wholeheartedly with most of this post, but ended up coming to a different conclusion than the author. I don't trust myself to write good, maintainable and safe C, so instead my semi-toy game engine has been (and is being) authored in Nim, which I've found quite interesting. Writing safe wrappers over C libraries is a challenge in and of itself!
Isn't Nim (and almost all of its standard library) garbage collected by default? I've been very intrigued by Nim but it seems that while it has nice native/C code generation, it opts into my least favorite element of "high-level" languages.
It is, yes. However, it's a bit different from most other GC-based languages in that it's a soft-realtime tracing GC. In addition, you can manage your memory completely manually if you like; in fact that's how you interface with C libraries, which I can say from experience works much nicer than I honestly expected going into it.
> When using OOP, like the idiomatic C++ coder does
Huh? Sure, you can go OO architecture astronaut in C++, but if that isn't a good model for your problem, don't do it. These days I see more of a focus on generic programming in C++.
Excellent article. I'm not sure I agree with all the points, but none of them are stupid or necessarily wrong. It's a very constructive argument.
I've been thinking for a while that the problem with C++ is that 100 decisions have been made both in the library & language and in "best practice" that all individually are good, but the combined effect has been unfortunate.
I'm not quite getting from the article why you can't write in the C "subset" of C++ and add in a few of the more helpful parts of C++ though.
> I'm not quite getting from the article why you can't write in the C "subset" of C++ and add in a few of the more helpful parts of C++ though.
I think this is because of C++ name mangling, which prevents the creation of a quick and dirty reflection system. And maybe compilation times, which must be higher since the compiler has to deal with all of C++, even if you don't.
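A quick illustration of the mangling point (the mangled spelling shown in the comment is typical of the Itanium ABI, not guaranteed); extern "C" is the per-symbol opt-out:

```cpp
// Exported under the plain symbol name "script_hook_fire", so a
// quick-and-dirty system can look it up by string in the symbol table.
extern "C" int script_hook_fire(int damage);
int script_hook_fire(int damage) { return damage * 2; }

namespace game {
    // This one gets a mangled name encoding namespace and parameter
    // types, e.g. something like _ZN4game4fireEi on GCC/Clang.
    int fire(int damage) { return damage * 2; }
}
```

Mangling buys overloading and namespaces, but it's exactly what breaks name-based tricks that are trivial in C.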
Completely agree, this is why most game developers use only a subset of C++ as "C with classes", and at the end of that thought process some are going back to C completely (I haven't made the jump yet though, but have been pondering this for at least 2 years). What I still like about C++:
- user-provided code called when an "object" goes out of scope
- operator overloading sometimes makes sense
- simple template-meta-programming sometimes makes sense (but could be replaced with other sorts of code-generation)
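Of the items above, operator overloading is easiest to show pulling its weight: math types. A minimal invented vec2:

```cpp
struct vec2 { float x, y; };

// pos + vel * dt reads like the math; the C equivalent is
// vec2_add(pos, vec2_scale(vel, dt)) for every expression.
inline vec2 operator+(vec2 a, vec2 b) { return {a.x + b.x, a.y + b.y}; }
inline vec2 operator*(vec2 v, float s) { return {v.x * s, v.y * s}; }
```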
3rd party libraries are usually a pain to integrate when they have a C++ interface, and usually simple when they are just a C header, and when they can simply be dropped in as source into the project.
A good middle ground is probably to build the building blocks in C; this way they are far more reusable than C++ code, also across languages, and then tie the low-level building blocks together in whatever language one likes (even interpreted languages).
In this day of mobile and web gaming, indie developers everywhere, and game engines like Unity and UE4 being so accessible, I think more game developers are writing games in Java, JavaScript, C#, and other garbage collected languages -- not to mention "visual programming" of Unity and UE4.
If you're writing a game engine in C/C++ today you're probably never going beyond the desktop Windows platform.
Performance will be back if VR gets adopted. At 90 fps, you have around 11 milliseconds to update your game state and render views for each eye. That is not much time. C#/unity and scripting languages will have a noticeable penalty compared to native C code. Bloated engines will likely suffer as well.
A lot of games run on Mono/.NET, Java, etc. IIRC these languages have far more overhead (I could be wrong). So unless you need >9000 fps, or you're doing a LOT of processing in a complex game, you might not need to worry about it?
The JVM can run pure Java code to within 2× of native code (although it has a pretty weak vectorizer, so if you're getting vectorized speedups in native code, Java won't get you that). However, calling a native function from Java is a really high overhead call. Which means that the core engine likely can run at essentially the same speed, but the graphics is going to be slower.
Both the CLR and the JVM can be fast; it's not that. They're still slower than C++, but not by much. In some benchmarks a JITed language can actually be faster.
However, the real reason Java and C# apps are most often much slower than native apps is that idiomatic C# and Java is a lot more work to execute. If you want to write really high-performance Java or C# code, you have to make it allocation-free on hot paths, you have to be very careful about which types live on the stack and which on the heap, and you have to pay a lot of attention to data layout (especially in Java, which doesn't have structs, forcing you to lay out lots and lots of types as SoA instead of collections of objects, for example).
The bottom line: while you can make C# and Java code run really fast, it's very hard, and you end up with non-idiomatic C# and Java code. It's so hard that it's actually no easier than writing e.g. Rust or C++ code to do the same thing! And that is exactly the point where C# and Java no longer make sense: if you can't use the idiomatic and ergonomic way of writing them, they have lost their value.
Note that this is the default, standardized behavior. If you're willing to use Oracle (and I think by extension OpenJDK)-specific features then there are some functions with "unsafe" in the name that allows you to hand over memory directly to native code, instead of copying it over. The default behaviour is to copy all memory passed in and out of the JVM, so as to avoid native+java+GC messing with the same memory area (potentially at the same time).
As soon as I saw this headline, I knew there would be dozens of comments.
First of all, they are both great languages, no matter what anyone tries to tell you.
I went through this same dilemma and, while I have great respect for C, I soon realized that I would be more productive with C++ and its tools immediately available. For one thing, a lot of important libraries common in games and graphics have C++ interfaces, so if you use C you are forcing yourself to exclude some great tools, or to go through significant steps to wrap some of them (which would still involve some C++ anyway).
I would look at one of those new type- and memory-safe languages which are more pleasant to work with and often faster than C/C++, especially with concurrency: Pony, Felix, Nim, Julia, OCaml, Elixir, Rust.
This reminded me of Banished [1], a game developed by a single developer who also wrote his own engine (though in C++). His development blog [2] goes into quite a lot of technical detail.
Don't focus too much on technology, disconnected from what people (including you) might want to actually use. Otherwise you will not get the real inspirations -- even purely technical inspirations -- that come from something that you intend to use.
I'm sure there have been Mac games written in Objective-C, but there are certainly no cross-platform games written in it, because vital bits of Objective-C aren't actually part of Objective-C; instead, they're part of the Foundation framework in OS X and iOS. You can compile Objective-C on whatever platform you like through GCC, but you'll have to write your own memory management, allocation, etc., among other things. That said, you might be able to write a cross-platform game through open-source Foundation/AppKit implementations like GNUstep and Cocotron, but keep in mind that they implement a version of Foundation that's several versions behind now.
The next closest thing is probably Swift alongside Apple’s FOSS reimplementation of Foundation for Swift, though I don’t know how well Swift works for development with OpenGL and such.
Yes, I used Obj-C to port a game from iOS to Android. Mac and Windows versions coming soon, also in Obj-C! I use Clang, GNUstep and my own rewrite of UIKit.
Possibly this is just Stockholm syndrome, but the more I use Obj-C, the more I like it. It uses reference counting, so memory overhead is low and consistent, and running time is consistent. It's not fast, true, but in my case most of the running time is spent in OpenGL anyway. Plain C is always there at your fingertips for crucial inner loops. Newer features like properties and for-each loops make it fairly pleasant to use. The standard Foundation library is pretty well-designed.
The one thing it's possibly missing is generics (Apple added that feature recently but I haven't tried it yet). Without generics, it feels a bit like a reference-counted Go -- that slight scripting language feel but with native speed.
The one big downside... On non-Apple platforms, the tooling is very patchy, and new features can take a while to arrive. If Apple is really serious about open-sourcing Swift, that could help a lot. I imagine there'll be more interest in new shiny Swift than crufty old Obj-C, but Obj-C is still a nice little language.
Modern Smalltalk-style languages can be fast. Look at Javascript! And in Obj-C, you can drop down to plain C when you need the speed.
It's possible to get your types mixed up when using non-generic collections, and that will throw an exception at runtime. No different from Go or Python.
In terms of memory safety, it's a step up from C if you use ARC, as all your object lifetimes are managed automatically. You can still get memory corruption if you're careless with arrays, but it's not really an issue with objects.
Well, I do know non-iOS devs whose dislike of C++ has caused them to speak in favor of Obj-C, but I suppose they would simply use C with structs in that case.
Every one of his complaints has a good solution, albeit some not widely known. Except compile time, single source file or otherwise - the compiler is using lots of smarts and lots of library headers on your behalf.
That said, author has done a good job explaining his thought process in taking the rewarding-at-every-step path, given he's not a C++-wrangling junkie.
Are there any other sources of information on "modern C"?
I'm thinking particularly of how to handle memory allocations well and safely, and how to write clean containers and algorithms. It's been a long time since I wrote pure C and I'd like to try it again!
Wow, I had to change the brightness and contrast setting on my monitor just to read this article. What is the strategy behind this darkish font against black background? I wish more websites geared towards a reading audience actually provided a nice reading experience.
"Safety-enforcing languages like Rust are going somewhat off already by definition, as they're focusing primarily on safety, which is not the focus for most game code."
I don't really agree with this assessment (I think rust's "safety guarantees" really amount to amazing compile time programmer assistance), but that's their stated reason.
Are you saying those are good things or bad things in the context of games? As I commented elsewhere, it's not that I think Rust's safety is unimportant, but it's not the only worthwhile element of the language, and I think it's good to recognize that its safety guarantees can eliminate many "non-safety" bugs in many programs.
I am saying they are bad things. Then again, game developers aren't known for spending even 1 s worrying about safety anyway, especially if it implies 1 ms less performance, even if that isn't an issue for the game being developed.
Which is surprising, since games are very much human-facing, and crashes are what reduce their value quite a bit. So using something like Rust should be a big boon for game developers. The only issue is that some things will still remain in the unsafe area (i.e. interfacing with the graphics backend and such), but at least the code the developers control can be safer.
> New and hip GC'd languages don't care about the priorities efficient game engine code has. Safety-enforcing languages like Rust are going somewhat off already by definition, as they're focusing primarily on safety, which is not the focus for most game code.
However, I disagree with this. Rust does enforce safety, but once you have a feel for how the borrow checker works (which isn't hard at all coming from C or C++), it fades into the background. The real benefits of Rust for games are the build system, the module system, affine/linear types, and all the various syntactic features like pattern matching and expression-based control structures that make it way less crufty to read and write than C or C++.
Rust's biggest weakness for game dev (again) isn't the enforced safety, but the still-growing ecosystem. There's a few really good low-level libraries (Glium, for example), but if you stray beyond them it turns into a nightmare of writing your own FFI definitions and unsafe glue code.
Right. The last ultra-performance-oriented work I did (in C) I'd have loved affine types and a borrow checker. Used right, these kinds of tools not only don't get in the way (as you note) but prove tremendously useful when you need to refactor.
The library thing is huge. If you're in C++ land you've got UE4 and a large number of open source graphics engines (Ogre3D), plus far more options in every category of engine (OpenAL, Box2D, and so on). If you're in .NET/Mono land you've got Unity and a plethora of engines built around Unity.
It's ridiculous how much work these engines will save you, once you know how to use them, but ecosystem is key because these engines are HUGE.