Is this true? It was never explained to me this way, and it doesn't make much sense. The increment will always be performed unless it can be optimized away. In places where the two are equivalent, using i++ might impose a space penalty, since the old value has to be kept around, which can be a significant consideration for large objects. I don't know what you mean by "change what i++ means", since you can't redefine ++ for any type for which it already has a meaning in C. (Or maybe you can, but I wouldn't know, because it would never be done in sanely written code; the possibility definitely isn't taken into account in any C++ coding recommendations I've read.)
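For what it's worth, here's a minimal sketch of where that copy cost comes from when ++ is overloaded for a user-defined type in C++ (the `Counter` type and its members are just illustrative, not from any particular codebase). By convention, prefix returns a reference to the updated object, while postfix has to return a copy of the old state:

```cpp
#include <cstddef>

// Illustrative iterator-like type, only to show the prefix/postfix convention.
struct Counter {
    std::size_t value = 0;

    // Prefix: increment in place and return a reference -- no copy needed.
    Counter& operator++() {
        ++value;
        return *this;
    }

    // Postfix: must keep a copy of the old state to return, then increment.
    // For a large object, that temporary copy is the space/time penalty.
    Counter operator++(int) {
        Counter old = *this;  // copy of the pre-increment value
        ++value;
        return old;
    }
};

int main() {
    Counter c;
    ++c;                    // modifies c, returns c itself
    Counter before = c++;   // c.value becomes 2, 'before' keeps the old value 1
    return (before.value == 1 && c.value == 2) ? 0 : 1;
}
```

For built-in ints the compiler will almost certainly elide the difference when the result is discarded; the copy only matters for types where copying isn't trivial.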
Mainly I was taught that situations that require i++ are less common, more subtle, and easier to get wrong, so programmers should make them stand out by using ++i everywhere else.