Hacker News

So no, I wouldn't like to preemptively prevent bugs, at least for any substantial cost or risk. At least not in my line of work, which usually focuses on exploratory or risky new features where some tolerance for bugs is allowed. As 37signals says, you can't save time, you can only do less. Every effort you make in a code change is a sunk cost (and often leads to more sunk costs, if follow-on changes are necessary), so you should only do it if it gets you something.

FWIW, the tool is semantics-preserving, and I actually like it (and the general suite of clang refactoring tools) a lot. My point isn't for or against the tool, it's about the general philosophy of maintenance changes. You should use it if you have already decided you're going to switch to C++11, and value style consistency across your team. Or if you are actively having a problem with code maintenance and new features are becoming hard to add - this was the situation that Google was in when we developed the tool. Or because your developers will be happier if they get to use C++11 and don't have to do the conversion work themselves. You should not use it because you read about it on the Internet, or because it's the new hotness.




This is interesting because it raises the whole meta-question of -- could you have developed feature X faster if you'd already done refactor Y? In my experience the answer is usually "yes", but if you have a mostly-moribund code base already, then it's usually "no".

Hey, reasonable people can disagree! :)

EDIT: "You should not use it because you read about it on the Internet, or because it's the new hotness." That's a bit of a cheap shot, at least when leveled at the posters in this thread, I think. Hopefully most professional developers are slightly more responsible than that. (Again, experience may differ, so...)


I find that if I look at features I write and ask "Could I have developed feature X faster if I had already done refactor Y?", then the answer is usually "yes" as well. The problem is that this isn't the question you're actually faced with when you make the decision (it is the question you face when you're the engineer who actually has to write the code, which is why engineers and management usually differ on the worth of refactoring). The real question is "If I do refactor Y now, will it save time on the feature X that I actually end up launching next?" And for that question, the answer is usually "no".

In other words, we suffer from hindsight bias. For any given feature, there is usually some combination of refactorings that will make it easier. The problem is that these refactorings are only evident in hindsight, once we've actually started to write the feature. If you speculatively do the refactoring, it surely will make some features easier to add, but those features are probably not the ones that you actually end up adding.

The way I solve this in my personal programming (where I am both engineer and manager, so my incentives are aligned) is to do any refactorings I wish I'd already done immediately before writing the feature code. That way, I know exactly what I'm aiming for and exactly what will make it easier, and there's no guesswork involved. Over time, this also seems to produce optimally compact code, albeit code that's sometimes non-intuitive to someone newly introduced to the project.

Unfortunately, while this seems optimal from a project perspective, it runs counter to the incentives of both the engineers (who want to seem like really fast & reliable coders to management, and not say "This feature will be delayed because of decisions we previously made") and the managers (who want to hear "This feature will be delivered tomorrow", not that it's delayed because of decisions previously made). Solving this incentive mismatch is an open problem; companies with technical management often do better at it, but there's still a big information loss here.


Very informative comment, thanks.

I certainly agree that local optimization (as in your response to my "example") usually wins in entrenched situations, but I disagree that it's "hindsight bias". Hindsight bias usually applies when you're responding to a future situation based on previous (idealized!) experience; if you're already in the middle of a project and deciding what to do next, you're not in the same situation as someone starting a new project based on previous experience.

I'll state plainly that I tentatively buy into the refactor-early-refactor-often mantra, assuming that the language can support that reasonably.

For me, I really think that every situation is a "global" vs. "local" optimization thing, and I'll argue towards "global" whenever I can, even if it reduces productivity in the short term. Michael C. Feathers' "Working Effectively with Legacy Code" was very instructive in this regard.


I wish I could find more comments like this on the internet, ones that go meta the way yours does. This is something I can refer back to in 5 years.



