It's good to see Ward Cunningham recognized for one of his many great contributions. I once heard a whole NPR piece about wikis and their history which didn't mention Ward as their inventor. It only occurred to me later that I should have written in.
The video of him talking about the debt metaphor is worth watching, especially the last minute where it gets subtle.
Technical debt is appropriate in prototype code, where the aim is to get something you can play with as a user and to get a feel for its technical feasibility. This works fine.
Unfortunately, I ended up wanting to go back to the code to reuse/build on my approaches to the key technical problems... but the hacked mess was tortuous to read or understand (interest payments). So the dilemma is: how to write readable code, when it's only for a prototype and so not worth the effort?
My current tradeoff is to make the prototype code too simple, by cutting out tricky special cases, and by deleting approaches I coded and all my comments discussing them, assessing issues etc. That is, make the code "workmanlike". It might be stupid, incomplete, inefficient and incorrect - but in a crystal clear way.
I actually find this discipline incredibly difficult, and I often lose days at a time exploring alternative coding. This often pays off; but the biggest payoff comes from having a clear and concrete basis to build on. I've just finished the current prototype, and many solutions to technical issues have now clicked into place as obvious to me - because I can now see them. So that works well.
And I also have something - incomplete, buggy, inefficient etc - that I can play with as a user. Ugh. In other words: a crystal clear sense of what needs to be improved. Maybe..."auditable"?
I've thought a lot about why technical debt seems to be such a hard concept for non-engineers to grasp; given how few business people or managers seem to understand it, it clearly must be difficult. I think it's because A) it only affects a truly long-lived, stateful process, like a software project (or many other kinds of engineering work) where the output of one phase is the input to the next phase, and B) there just aren't that many processes like that in ordinary life.
Sure, in any field you can screw up in such a way that it hurts you in the future by damaging your reputation or annoying your customers, or make bad technical investments that turn out to cost you. But in engineering disciplines, doing your core activity (engineering) poorly can hurt your ability to keep doing your core activity (engineering) in the future.
If you're in sales and you cut corners with one client, it doesn't make your phone calls take longer. If you're in a restaurant and you cook things too fast or cut corners on ingredients, it doesn't make your future cooking take longer. If you're a doctor and you're too hasty and mis-diagnose a patient, it doesn't make your future surgeries and checkups take longer. But in software, if you're too hasty adding feature A, it does make your future features take longer to develop. And that's because in software, your future features are built on the previous ones, whereas with sales, cooking, or medicine the duration of any given prospect/customer/patient is relatively short-lived and finite.
That's my best theory as to why it's so hard for non-engineers to understand.
In a nutshell, could it be argued that kludges are fine as long as the programmer:
• Knows the exact location of the hacks.
• Is aware of the repercussions, and is confident that the system can get away with the kludges for the time being without any hindrance to its effectiveness in getting things done.
• Knows how to fix them if need be.
• Communicates the situation to others, including fellow programmers and managers.
The people who pay the debt back probably have to be the same ones who accumulated it in the first place, otherwise the learning process (for which the whole thing is a metaphor) is short-circuited.
Even after reading the article, I think that folks often don't understand that Technical Debt is not a "bad thing" but a thing that must be understood and managed.
When you understand this, it puts software engineering into a very clear perspective.
That's true, but how do you manage it? Especially when it's so difficult to measure in the first place.
One idea that popped into my head would be to keep a running tab of technical debt that's being accumulated in a project. It's tough though, because developers who are in "accumulating debt" mode tend to be the ones with the least amount of time to spend on maintaining a document that tracks their debt.
I open a ticket saying things like "this needs to be documented, estimate is X hours to document fully; don't forget gotchas A,B, and C", or "fix this corner case that occurs X percent of the time", or "this will run slowly when traffic reaches X because of Y" and such, giving it a priority that indicates exactly what the technical debt would cost if it wasn't resolved (super low priority is really cheap, no big deal, extremely minor inconvenience). This is partially so I don't forget about it, but also to document that something isn't as clean as it could be. These are not hard and fast "measurement" numbers, but by looking at the ticketing system, you can get a general idea of how far things are behind.
It also gives you a list of alternative things to work on when you're getting burned out on the active tasks.
It's managed by the thought process of the programmers. A good programmer knows when and where the technical debt is and wants to pay it off. If the programmers don't understand this or don't care, there's not much that can be done. Teaching works, up to a point (but no further).
You can't manage this with documents for the same reason you can't manage thinking with documents.
If I were going to try anyway, and were working with a team of more than 2 or 3, I might put a big piece of paper on the wall and ask everyone to write on it whenever they notice technical debt. The bigger the issue, the bigger the note. Maybe use different colors for different parts of the system. After a while, patterns would emerge. But the real purpose of this would not be to track accumulated debt; it would be a device to encourage people to talk about it and think about it.
"It's managed by the thought process of the programmers." -- In enlightened organizations, sure.
Is it unprofessional to allow managerial deadline pressure to influence code quality? At what point is it justified for a programmer to say "No"? At what point is it wise?
> Is it unprofessional to allow managerial deadline pressure to influence code quality?
Actually, yes, I think it often is. If you're paying me to write code, then it's incongruent for you not to rely on my judgment about how to do it. If you don't like the results I produce, you're well within your rights to find somebody else. But you are not within your rights to hire me and then block my judgment about what the system needs.
The vast majority of the time, it's not a matter of explicitly saying "No": if you even get into that conversation in the first place, it's probably game over. Asking for permission to do a good job is likely to meet with, "No. Do a faster and cheaper job instead." That's an argument the programmer will never win. And rightly so, one might say! - because of the lack of self-respect they're demonstrating by such behavior. One thing I've learned over the years is that it's my own fault if I don't stand by what I value and know. (Well, partly learned, at least.)
If you want to be professional, don't act like a servant.
Edit: I was thinking about why this is by and large so poorly understood by software developers. I think part of it is that programmers tend to be younger. They haven't had time to learn this the hard way, and there is a large authority and status gap between them and their (usually older) managers. Perhaps another aspect is that many programmers are introverts and thus less likely to advance their own interests and judgments, even when they're right.
I find this interesting because those are both reasons why the hacker renaissance and the corresponding startup explosion are really good things. When the hackers own the product, the situation we're talking about is much less likely to arise.
Tech people can be as bad as anyone else when it comes to this kind of stuff. Put one hacker "in charge", and make another hacker a subordinate, and suddenly you've got these kinds of authority issues. In fact, they might be worse, because hackers are always fond of thinking that they can do things faster and better than the other guy, and they're especially fond of being confrontational about technical issues.
Other than that, I agree with you. Last year, I heard a talk given by two co-founders (one technical, one not), who were discussing their working relationship. The tech guy got a big laugh with this line (paraphrased):
"Every once in a while, [my cofounder] gets on my nerves with questions like 'why can't it be done sooner?' It's really helpful to be able to answer 'because fuck you, that's why.'"
That's a luxury that most programmers never get to experience, unfortunately.
To me 'managing it' means managing risks, which often means managing long-term expectations. Debt can't exist without risk, even the technical debt mentioned here. The way you manage that risk really depends on project-specific variables - timeline, budget, staffing. You might allow a grievous hack to get something to market now, with the expectation that you will have more staffing to clean it up later. Or you may invest in a robust design now because the expectation is for staffing to be cut to 1/2-head for the next 3 years, and you need something that can adapt under that risk profile.
I leave a lot of TODO and FIXME comments in code (which are then tracked and highlighted by my IDE). These range from code that may actually be broken on certain rare or unexpected corner cases (though not dangerously so), to known inefficiencies, to areas where the rough idea for a much better approach exists but can't yet be fully developed.
Very roughly, they are a measure of known technical debt.
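As a rough illustration, a recursive grep gives a per-file tally of those markers; the directory and file below are invented for the demo:

```shell
# Count TODO/FIXME lines per file as a crude measure of known debt.
mkdir -p src
printf 'x = 1  # TODO: validate input\n# FIXME: off-by-one near the boundary\n' > src/model.py
grep -rcE 'TODO|FIXME' src   # prints src/model.py:2
```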
Same here (in gvim). I also like pylint's default behavior of issuing warnings where it finds these and penalizing the code's final score in the report -- it's a proxy for code problems that have been silenced or worked around, but still need some attention later. Integrating this info with a bug tracker seems like a good idea too for sufficiently complex projects.
I think I wind up adding new ones at about the same rate as old ones disappear, but I'd have to run a report over the history to know for sure.
It's always nice when I come across a note that's been made irrelevant by other parallel improvements along the same theme -- even though the note wasn't the impetus for the other changes. It's like finding a $20 on the street!
I had seen a Martin Fowler article on technical debt before, but the link to Ward Cunningham is new to me, at least.
As a part-time economist, it occurs to me that the debt metaphor actually doesn't work on some levels: Companies constantly increase their monetary debt, since THAT can be paid through increased profits. You actually wouldn't want a company that grew without increasing debt - that would be wasted resources. Technical debt is much worse to accumulate since it can only be paid by increased development time.
Also, one accumulates technical debt whether one wishes to or not. This is because often the only way to understand how to do a system right is to first do it wrong.
I'm not sure debt is a good metaphor for code complexity.
Debt grows logarithmically, if the complexity of your code base is growing logarithmically you're doing great.
The problem is software complexity grows exponentially.
And that's why you can't manage it like you can manage high interest debt.
You can try to manage it, but I have yet to see that management succeed.
Every single time a company has tried to manage technical debt, the debt simply goes unpaid.
How many people here who've worked on a sizable and mature code base can name temp files and variables that have turned absolutely permanent over the years?
That's why software complexity, in large organizations, should be approached like an infestation. Erase and eradicate on sight, don't try to save some for later.
Great article. It reminds me of another dilemma I came across earlier this week.
I had a 280MB xml file containing about 120,000 records. I needed a way to parse out a subset of the records (about 15,000) and then put their data into a db.
I was developing on a VPS with only 256MB of RAM, so I wanted to avoid memory-intensive operations. I'm using Ruby, so I started looking into how to 'stream' the XML in such a way that I could check whether a record was the type I wanted to keep, manipulate and store its data, and move on to the next record. The more I looked into that strategy, the more complex and ominous it seemed. I really wanted to find a simpler way that wouldn't require all the apparent tedium of streaming the XML file (it may seem simple enough, but there are dependencies to install, APIs to read and learn, etc.). There were just too many moving pieces, and it made me uneasy about the outcome. Another aspect of it was pure laziness. I just don't care how I get that data into the db - I just want to get it in there.
Fortunately, the glorious wonderfulness of linux utilities saved me from a long tedious solution. Behold:
This splits the large xml file into sub-xml files that start with 'xx' by default, followed by an incremental number, xx10004 for example. It splits the file based on the <title_index_item> tag - which is the tag for the items I want. See `man csplit` for more info...
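A csplit invocation along these lines would do it (a hedged reconstruction: GNU csplit is assumed, '-n 5' gives five-digit suffixes like the xx10004 mentioned above, and the tiny sample file stands in for the real 280MB one):

```shell
# A small stand-in for the 280MB file:
printf '%s\n' '<records>' \
  '<title_index_item>first</title_index_item>' \
  '<title_index_item>second</title_index_item>' \
  '</records>' > big.xml
# -z drops empty pieces; '{*}' repeats the split at every match.
csplit -z -n 5 big.xml '/<title_index_item>/' '{*}'
ls xx*
```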
The 'find' lists all the sub-xml files, then we grep for the filenames that do not contain what I'm looking for, then delete those files. I'm left with a directory of 15,000+ xml files with just the type of data I'm interested in.
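Spelled out, that cleanup pipeline might look like the following; 'WANTED_TYPE' is a stand-in, since the actual record type isn't shown in the thread:

```shell
# Demo setup: two split files, only one with the wanted record type.
mkdir -p parts
printf '<title_index_item><type>WANTED_TYPE</type></title_index_item>\n' > parts/xx00000
printf '<title_index_item><type>other</type></title_index_item>\n' > parts/xx00001
# grep -L lists the files that do NOT match, and those get deleted.
find parts -type f | xargs grep -L 'WANTED_TYPE' | xargs rm
ls parts   # only xx00000 remains
```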
find . | wc -l
This command is just so I can track the progress of the operations. Obviously `ls` would return too much data, so we pipe the file list to `wc -l` which gives us the number of lines - which in this case is the number of files in the dir.
So: two lines of code on the command line instead of a far more complex Ruby/streaming-XML solution. Now I can have a Ruby script process each individual file and add the data to the db - a much simpler problem to deal with.
My point is that you can accumulate code debt by doing the seemingly 'right' solution sometimes. There are probably coders that will cringe that I just used a couple of shell commands to do this rather than write up a long, well-documented, properly OOP, TDD, etc "right" solution. However, the "right" solution in that case would have accumulated code debt - more code to maintain, more moving pieces, more things that can go wrong. I'd trade maintaining two lines of shell code over tens or hundreds of lines of Ruby code and its dependencies any day of the week.
>There are probably coders that will cringe that I just used a couple of shell commands to do this rather than write up a long, well-documented, properly OOP, TDD, etc "right" solution.
No, I don't think so. This is not technical debt. This is simply the right way to do it. More complicated is NOT better! A lot of people, for some reason especially people who like classes and MVP and templates think that more complicated is better, and will write insane amounts of code to do a very simple thing. I think that's wrong.
The reply is usually something along the lines of it'll be easier to do this and that. But they are not doing this and that, they are doing something simple. Add the complexity when you need it - not in advance.
In similar situations I just use a SAX parser, that's what it's for. I don't know about Ruby, but it's quick and easy in Python. And once you've written one SAX application, you can easily use it as the basis for the next.
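For reference, a minimal Python SAX handler for this kind of record filtering might look like this; the tag name comes from the thread above, everything else is illustrative:

```python
import xml.sax

class ItemFilter(xml.sax.ContentHandler):
    """Collect the text of every <title_index_item> element."""
    def __init__(self):
        super().__init__()
        self.items = []      # finished records
        self._buf = None     # text buffer while inside a wanted element

    def startElement(self, name, attrs):
        if name == "title_index_item":
            self._buf = []

    def characters(self, content):
        if self._buf is not None:
            self._buf.append(content)

    def endElement(self, name):
        if name == "title_index_item":
            self.items.append("".join(self._buf))
            self._buf = None

# Because SAX streams, memory stays flat no matter how big the input is.
handler = ItemFilter()
xml.sax.parseString(b"<r><title_index_item>a</title_index_item></r>", handler)
print(handler.items)  # prints ['a']
```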
Totally. If I had had to do anything much more complex, I may have had to resort to SAX. I was just too lazy to mess with it for this - and I'm a bit OCD sometimes about writing the least amount of code possible ;)
Given that you are programming in Ruby, you can make your shell script part of a Ruby script that then calls the code to process each individual file, no? Ruby gives you best of both worlds in this case.
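A hedged sketch of that hybrid: the shell step does the coarse filtering from inside Ruby, and Ruby then processes whatever survives. The file-name scheme and the 'title_index_item' filter come from the thread; the rest is invented for the demo:

```ruby
# Make a couple of demo split files, only one of the wanted type.
Dir.mkdir("pieces") unless Dir.exist?("pieces")
File.write("pieces/xx00000", "<title_index_item>a</title_index_item>")
File.write("pieces/xx00001", "<other>b</other>")

# Shell does the coarse filtering (delete files lacking the tag)...
system("grep -L 'title_index_item' pieces/xx* | xargs rm")

# ...and Ruby processes each surviving file, e.g. inserting into the db.
Dir.glob("pieces/xx*").sort.each do |path|
  record = File.read(path)
  puts "would insert: #{record.strip}"
end
```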
Gerald Weinberg refers to this same concept as "maintenance debt" in his four-volume "Quality Software Management" series from the early-to-mid '90s. I think it's an independent formulation of the same concept. The volume most concerned with it is Vol. 4, "Anticipating Change": http://www.amazon.com/Quality-Software-Management-Anticipati...
The article is good, but I can't help feeling that it just applies a clever name to a common-sense principle that's relevant to just about every field (including finance and technology).
"Technical Debt" is a surprisingly self-explanatory term. I have used it to great success with my management team when arguing for time for refactoring, code reviews, and other non-user facing work.
Exactly. Technical debt is a great tool for explaining to non-technical management why you'd want to invest time in refactoring, reviews, etc. that have no obvious tangible result in the finished product (short version: it's a drag factor that makes it harder to add features and keep the bug count down later).
It's also a good tool to allow the management to do what management does best: take the technical information and make management/financial decisions based on it, on how much investment can be made now.
It turns a binary either/or choice (devs saying "Our code is bad, we need to rewrite") into a quantitative one: "How much technical debt do we have? Is it increasing? Can we reduce it?"