You can add them when you're actually facing a bug that you don't know the cause of. If the override keyword helps you find the cause, you are in exactly the same position that you were in, with the same time expended, as if you had pre-emptively added it. If the bug never occurs or you don't need to add declarations to everything to find it, you are strictly better off than if you'd done so initially.
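For what it's worth, here's a minimal sketch (hypothetical Widget/Button classes, not taken from any real codebase) of the failure mode that `override` turns from a silent bug into a compile-time error:

    #include <iostream>

    struct Widget {
        virtual void draw(int scale) const { std::cout << "Widget " << scale << "\n"; }
        virtual ~Widget() = default;
    };

    struct Button : Widget {
        // If this signature were mistyped as `void draw(long scale) const`, it would
        // silently *hide* Widget::draw instead of overriding it, and calls through a
        // Widget reference would quietly keep running the base version.
        // With `override`, any such mismatch is rejected by the compiler instead.
        void draw(int scale) const override { std::cout << "Button " << scale << "\n"; }
    };

    int main() {
        const Button b;
        const Widget& w = b;
        w.draw(2);  // prints "Button 2": the override actually took effect
    }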
This illustrates a general principle in software maintenance: later. Push decisions off until they're actually causing problems, and then fix them as soon as it's apparent that they're a problem. Code that's removed because of changing requirements, or never touched again because the feature is frozen, doesn't need to be maintained. Even for code that does need to be maintained, you have more information about what the optimal architecture for the system is once you've built the system out more.
(All this assumes consumer or non-critical enterprise grade software, where the cost of a bug is that you spend time finding & fixing it. If you're doing high-assurance software like avionics or medical devices, or anything where a bug is likely to cause major collateral damage, then by all means add every language feature designed to stop bugs ASAP.)
> You can add them when you're actually facing a bug that you don't know the cause of.
EDIT: Sorry, mostly rewrote this comment. Apologies if anyone responded during that time.
Sure, but then you're just reacting to bugs. Wouldn't you like to preemptively prevent bugs? Isn't that part of why you'd use a C++ compiler rather than an interpreter in the first place?
Just reacting is sort of like "what you don't know won't kill you". But the thing is that it might end up killing you (figuratively, not as in avionics). Personally, I think it's part of due diligence as a consultant. Anything that will reduce our chances of making an error before runtime is worth it -- up to a point. Does auto-adding "override" exceed the cutoff point? I don't think so.
Btw, how do you know that these potential problems aren't causing (small per day, but large over time) monetary damage? If you have end-to-end metrics, then you're probably fine, but it can be very hard to judge if bugs are actually causing you damage.
(I'm going on a huge assumption here, namely that this tool -- which I'm not familiar with -- is actually semantics-preserving -- which is an extremely hard problem in C++.)
So no, I wouldn't like to preemptively prevent bugs, at least for any substantial cost or risk. At least not in my line of work, which usually focuses on exploratory or risky new features where some tolerance for bugs is allowed. As 37signals says, you can't save time, you can only do less. Every effort you make in a code change is a sunk cost (and often leads to more sunk costs, if follow-on changes are necessary), so you should only do it if it gets you something.
FWIW, the tool is semantics-preserving, and I actually like it (and the general suite of clang refactoring tools) a lot. My point isn't for or against the tool, it's about the general philosophy of maintenance changes. You should use it if you have already decided you're going to switch to C++11, and value style consistency across your team. Or if you are actively having a problem with code maintenance and new features are becoming hard to add - this was the situation that Google was in when we developed the tool. Or because your developers will be happier if they get to use C++11 and don't have to do the conversion work themselves. You should not use it because you read about it on the Internet, or because it's the new hotness.
This is interesting because it raises the whole meta-question of -- could you have developed feature X faster if you'd already done refactor Y? In my experience the answer is usually "yes", but if you have a mostly-moribund code base already, then it's usually "no".
Hey, reasonable people can disagree! :)
EDIT: "You should not use it because you read about it on the Internet, or because it's the new hotness.". That's a bit of a cheap shot, at least when leveled at the posters in this thread, I think. Hopefully most professional developers are slightly more responsible than that. (Again, experience may differ, so...)
I find that if I look at features I write and ask "Could I have developed feature X faster if I had already done refactor Y?", then the answer is usually "yes" as well. The problem is that this isn't the question you're actually faced with when you make the decision (it's the question you're faced with once you're the engineer who actually has to write the code, which is why engineers and management usually differ on the worth of refactoring). The question at decision time is "If I do refactor Y now, will it save time on the feature X that I actually end up launching next?" And for that question, the answer is usually "no".
In other words, we suffer from hindsight bias. For any given feature, there is usually some combination of refactorings that will make it easier. The problem is that these refactorings are only evident in hindsight, once we've actually started to write the feature. If you speculatively do the refactoring, it surely will make some features easier to add, but those features are probably not the ones that you actually end up adding.
The way I solve this in my personal programming (where I am both engineer and manager, so my incentives are aligned) is to do any refactorings I wish had already been done immediately before writing the new code. That way, I know exactly what I'm aiming for and exactly what will make it easier, and there's no guesswork involved. Over time, this also seems to produce optimally-compact code, albeit code that's sometimes non-intuitive to someone newly introduced to the project.
Unfortunately, while this seems optimal from a project perspective, it runs counter to the incentives of both the engineers (who want to seem like really fast & reliable coders to management, and not say "This feature will be delayed because of decisions we previously made") and managers (who want to hear "This feature will be delivered tomorrow", and not delayed because of decisions previously made). Solving this incentive mismatch is an open problem; companies with technical management often do better at it but there's still a big information loss here.
I certainly agree that local optimization (as in your response to my "example") usually wins in entrenched situations, but I disagree that it's "hindsight bias". Hindsight bias usually applies when you're responding to a future situation based on previous (idealized!) experience, but if you're already in the middle of a project and deciding what to do, then you're not really in the same situation as if you're starting a new project based on previous experience. (That's a bit mangled, I hope my intention makes sense.)
I'll state plainly that I tentatively buy into the refactor-early-refactor-often mantra, assuming that the language can support that reasonably.
For me, I really think that every situation is a "global" vs. "local" optimization thing, and I'll argue towards "global" whenever I can, even if it reduces productivity in the short term. Michael C. Feathers' "Working Effectively with Legacy Code" was very instructive in this regard.