Generally a wonderful set of minimalistic rules; much of it could carry over beyond C.
Except for "OMG-API-3: Unsigned integers are preferred over signed". I feel they're on the wrong side of history with this one.
"Prefer unsigned" only works if you can do 99% of your codebase this way, which, besides LLVM, probably doesn't work for anyone. Having a codebase that is 99% signed is much more feasible. The worst is a codebase with plenty of both, which guarantees endless subtle bugs and/or a ton of casts. That's what they'll end up with.
Even LLVM has all this comical code dealing with negative values stored in unsigneds.
The idea that you should use unsigned to indicate that a value can't be negative is also pretty arbitrary. Your integer type doesn't represent the valid range of values in almost all cases; enforcing it is an unrelated concern.
I can see where they're coming from: signed integers come with all sorts of caveats in C and C++, from overflow being undefined behaviour (yet modulo math often makes sense when integers are used as array indices) to bit-twiddling surprises. "Almost always unsigned" sounds like a good rule to me to avoid such pitfalls, especially when 'common math stuff' is usually done with floats or special fixed-point formats.
Overflow being UB is not something you run into easily with typical math, index, and size uses (not as often as you run into unsigned issues, in my experience). Yes, bit-twiddling should be unsigned, but it is very easy to keep this code isolated and convert from signed values storing these bits, if necessary.
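Something along these lines is usually enough to keep the unsigned parts contained (a sketch; the helper name is made up):

    #include <stdint.h>

    /* Do the bit manipulation on an unsigned type and convert at the
       boundary; the rest of the code stays signed. The signed-to-unsigned
       conversion is always well defined (modulo 2^32); the conversion back
       relies on the usual two's-complement behaviour. */
    static inline int32_t rotl_i32(int32_t value, unsigned bits)
    {
        uint32_t u = (uint32_t)value;
        bits &= 31u;                      /* avoid shifting by 32 */
        if (bits)
            u = (u << bits) | (u >> (32u - bits));
        return (int32_t)u;
    }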
We actually had a debate about this. I was initially in favor of "signed as default", but I acquiesced.
In retrospect, I think I was wrong and that "unsigned as default" works better.
I think the domain is important here. We're building a game engine, so there's actually plenty of bit fiddling. We also make use of the "overflow wraparound" of unsigneds in a lot of places.
I think in our case having 99% unsigned is more feasible than having 99% signed. There are actually not many things that we would need to use a signed integer for.
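To illustrate the kind of wraparound use meant here (made-up names, not actual engine code):

    #include <stdint.h>

    /* Tick/frame counters that are allowed to wrap. Because unsigned
       overflow is well defined, "newer - older" gives the right delta even
       across the wrap, as long as the real distance fits in 32 bits. */
    static inline uint32_t tick_delta(uint32_t newer, uint32_t older)
    {
        return newer - older;   /* e.g. 3u - 0xFFFFFFFEu == 5 */
    }

    /* "Did a happen after b?", also wrap-safe (assumes the usual
       two's-complement interpretation of the cast). */
    static inline int tick_is_after(uint32_t a, uint32_t b)
    {
        return (int32_t)(a - b) > 0;
    }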
Well, you have the facts to back it up, so indeed it can work for you. I'd say it requires quite some commitment to push it that far though, and I still would think it's not the right default for almost all teams. Impressive you made it work :)
I actually like the distinction between std::size_t/std::ptrdiff_t (and, even better, std::vector's ::size_type and ::difference_type). It makes the intent very clear.
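For example (hypothetical snippet):

    #include <algorithm>
    #include <vector>

    // Element counts stay in size_type, iterator distances in
    // difference_type (signed), and any crossing between the two is
    // spelled out where it happens.
    std::vector<int>::difference_type index_of(const std::vector<int>& v, int needle)
    {
        auto it = std::find(v.begin(), v.end(), needle);
        if (it == v.end())
            return -1;             // a signed "not found" sentinel is natural
        return it - v.begin();     // iterator difference is difference_type
    }

    void grow(std::vector<int>& v)
    {
        std::vector<int>::size_type n = v.size();   // sizes: size_type
        v.reserve(n * 2);
    }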
It helps that I get to compile all my code with -Wconversion.
    #pragma once
    #ifdef __cpluspus // <-- should be __cplusplus
    extern "C" {
    #endif
    #include "api_types.h"
    // ...
    #ifdef __cplusplus
    }
    #endif
Can't say I'm a fan of OMG-CODEORG-3; however, it sounds like compilation time is a key metric for them. I prefer a John Lakos-style "physical components" setup, which emulates a type-as-module inclusion style. At least OMG-CODEORG-3 clearly states that include order becomes important as a result.
I wasn't sure about OMG-CODEORG-3 in the beginning either, but after using it for over a year and a half now, I'm strongly in favor.
The only situation where inclusion order matters is when there's (pseudo-) inheritance, and we don't use that a lot, so in practice it is not a big issue.
Actually, I've had MORE problems with inclusion order in previous projects that didn't use this rule. What would happen is that some header (included from some other header, included from some other header) would include <windows.h>. Then some other header (from some other header, etc) would include something that conflicts with the (many) #defines in <windows.h>.
Trying to sort out this mess was always a PITA. First you have to figure out where the include is coming from. Then you have to figure out how to fiddle with the include order and the defines to fix it. When using OMG-CODEORG-3, this is pretty simple, because all the includes happen in the .c file, so it is easy to rearrange them to fix include order problems. Not so easy when the includes are scattered all over multiple .h files.
Another big win with OMG-CODEORG-3 is that you see exactly what other pieces of code the .c file depends on; you don't need to follow multiple header chains to figure it out. You also only depend on the things you really need, which is nice. In projects with liberal header inclusion, dependencies can grow as O(n^2), which increases complexity.
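Roughly, the layout ends up like this (file and type names invented for illustration, not the actual engine code):

    /* foo.h -- follows the rule: the header includes nothing itself. It
       relies on <stdint.h> (for uint64_t) having been included by the .c
       file before this header. */
    struct foo_t;
    uint64_t foo_count(const struct foo_t *foo);

    /* bar.c -- ALL includes happen here, in one visible, controllable
       order. */
    #include <stdint.h>   /* must come before foo.h, which needs uint64_t */
    #include "foo.h"

    uint64_t twice_the_count(const struct foo_t *foo)
    {
        return 2 * foo_count(foo);
    }

If something conflicts, you reorder a handful of lines at the top of one .c file instead of hunting through a header chain.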
Agree, CODEORG-3 adds a bunch of pain. There's a reason other languages don't have headers, but since C programmers have to live with them, can't I just include the single relevant header and move on with writing my code?
Yes, there is a shared cost to that (compile time), but '#pragma once' is well supported, and futzing with header order is a non-trivial time-sink too.
Along the same lines, the template 'cute tricks' are where you get your performance, stability, and readability from in C++. I definitely agree that you should drop into assembly to see what the compiler is doing with your code, but that can and should apply to heavily templated code too.
Better yet, use std::chrono. Yes, it's C++. But this is an example of how properly applied bits from C++ can make things easier to reason about and type-safe, rather than "let's avoid C++ as much as possible".
No ambiguity for the programmer as to what the underlying units are, and no unnecessary int/float conversions. All the book-keeping and conversions are taken care of by the compiler with zero run-time size or perf overhead.
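A small sketch of the kind of thing meant here (made-up frame-budget example):

    #include <chrono>
    #include <cstdio>

    // The unit is part of the type, so there is no ambiguity about whether
    // a value is in milliseconds or microseconds; at run time a duration is
    // just its integer representation.
    int main()
    {
        using namespace std::chrono;

        constexpr auto frame_budget = milliseconds(16);

        const auto start = steady_clock::now();
        // ... a frame's worth of work ...
        const auto elapsed = steady_clock::now() - start;

        if (elapsed > frame_budget)    // comparing across units just works
            std::printf("over budget by %lld us\n",
                        static_cast<long long>(
                            duration_cast<microseconds>(elapsed - frame_budget).count()));
    }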
std::chrono is a terribly overengineered API even for the STL, and many game companies have banned parts or all of the STL for good reasons (usually not std::chrono related though).
Using a uint64_t (instead of a uint32_t or a double) to carry "opaque ticks", plus a handful of conversion functions to convert to real-world time units, is fine and just a few lines of code.
It would still be an offset much less than the loss of precision from storing int milliseconds.
There are also some techniques for dealing with the time inaccuracies, although I feel those aren't widely known.
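Going back to the opaque-ticks idea above, those few lines of conversion code could look something like this (tick rate and names are made up):

    #include <stdint.h>

    /* Time is passed around as opaque ticks in a uint64_t; only these few
       functions know what a tick actually is. 100 ns per tick is just an
       example rate. */
    #define TICKS_PER_SECOND 10000000ull

    static inline double ticks_to_seconds(uint64_t ticks)
    {
        return (double)ticks / (double)TICKS_PER_SECOND;
    }

    static inline uint64_t seconds_to_ticks(double seconds)
    {
        return (uint64_t)(seconds * (double)TICKS_PER_SECOND + 0.5);  /* assumes seconds >= 0 */
    }

    static inline uint64_t ticks_to_ms(uint64_t ticks)
    {
        return ticks / (TICKS_PER_SECOND / 1000u);
    }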
Objects as structs with function pointers? 1990 is calling. I'm not a huge C++ fan, but trying to emulate C++ concepts in C is kind of lame at this late date.
Except for: "OMG-API-3: Unsigned integers are preferred over signed".. I feel they're on the wrong side of history with this one.
"Prefer unsigned" only works if you can do 99% of your codebase this way, which, besides LLVM, probably doesn't work for anyone. Having a codebase that is 99% signed is much more feasible. The worst is a codebase with plenty of both, which will be guaranteed endless subtle bugs and/or a ton of casts. That's what they'll end up with.
Apparently the C++ committee agrees that size_t being unsigned was a huge mistake (reference needed), and I would agree. Related discussion: https://github.com/ericniebler/stl2/issues/182 https://github.com/fish-shell/fish-shell/issues/3493 https://wesmckinney.com/blog/avoid-unsigned-integers/