Hacker News
On Programming Deadlines (rdegges.com)
65 points by craigkerstiens on Nov 1, 2011 | hide | past | favorite | 22 comments



I strongly agree with all five rules Randall pointed out. But, IMHO, the first step to really understanding those rules is to fail dramatically at every one. [this is my first comment on HN! :)]


I think the article is fine as written, but in the interests of expanding on its ideas, I submit that writing tests first is a design technique and maintaining substantial test coverage is a regression-proofing technique. Both also serve as documentation.

I mention this, because these three objectives--design, regression-proofing, and documentation--can be achieved in various other ways besides writing tests first, and it can be helpful to separate them in your mind even if you choose to practice TDD with great discipline.


I always hear that TDD is good for documentation, but most of my unit tests cover odd corner cases, weird error conditions and the like. They're certainly not a showcase of what to do. What do you do with your unit tests that makes them high-quality documentation?


That's often true of tests that were only used to check the validity of code, not its design.

In many uses of TDD, you will use somewhat broader specifications to test things. "The login system accepts any user with a valid name by returning '1' and returns '0' if the password isn't the right one". You can have a test to verify that without using special corner cases, and a test such as this one will likely look like any other normal use of your library or function.

If you develop all your requirements using TDD, then both the corner cases and the standard cases should be sitting in the test suite. The standard cases can act as documentation and a design tool, while the corner cases are really there to ensure things work absolutely right.
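As a sketch of what such a broad-specification test might look like (the `login` function here is a hypothetical stand-in for "the login system" described above, not code from the article):

```python
# Hypothetical stand-in for "the login system" in the comment above:
# returns 1 for a known user with the right password, 0 otherwise.
USERS = {"alice": "s3cret"}

def login(username, password):
    return 1 if USERS.get(username) == password else 0

# Broad-specification tests: they read like normal uses of the
# function rather than exotic corner cases, which is what makes
# them usable as documentation.
def test_valid_user_with_correct_password_is_accepted():
    assert login("alice", "s3cret") == 1

def test_wrong_password_is_rejected():
    assert login("alice", "wrong") == 0

def test_unknown_user_is_rejected():
    assert login("mallory", "s3cret") == 0

test_valid_user_with_correct_password_is_accepted()
test_wrong_password_is_rejected()
test_unknown_user_is_rejected()
```

Note how each test name states a requirement in plain language; a reader can learn the intended behaviour without opening the implementation.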


Setting up deployment routines early is a seriously great tip. It took me years to realize that, and to start doing it.


I read so many meta-programming articles that try to make a science out of "getting shit done."

While there are techniques that aid the process, your time is better spent actually getting the aforementioned shit done.


Lots of good points. If you liked what you heard in this post, I recommend reading The Clean Coder. Uncle Bob has a lot of great little anecdotes about dealing with non-technical stakeholders and deadlines.


The part of the story I don't like is where the boy gave up looking for the dog after an hour.


Did you comment on the wrong article?


That was my favourite part!


I don't have time to implement TDD.


TDD is particularly valuable (I'd even say unavoidable) when writing modules, libraries and so on. It's extremely helpful for the tough parts of a program whose behaviour isn't sufficiently defined by the spec. Writing tests first in these cases will spare you countless traps, particularly in "the other 90%" mentioned in Cargill's law:

    "The first 90 percent of the code accounts for the first
 90 percent of the development time. The remaining 10 percent
 of the code accounts for the other 90 percent of the 
 development time."
    —Tom Cargill, Bell Labs


I can see how TDD could be useful for writing a relatively static piece of code, especially code that is likely to be reused by a lot of people.

But I develop web applications in 1-week sprints for my stakeholders and they're likely to change their mind about what they want at short notice.

In these circumstances it simply doesn't make sense to write tests up front. I don't even have time to write a lot of tests when I'm done :-(


How long do you spend manually testing? I find that automated testing (note: I don't really care about TDD, just unit testing and automated testing) takes a little longer to get set up and running. Subsequent testing is cheaper, because regression testing is free and test code can be reused when developing new tests.
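To sketch the reuse point (a hypothetical shopping-cart module, invented purely for illustration): the setup helper written for the first test is free for every later test, so each new test costs little more than its assertions.

```python
# Hypothetical cart module, used purely for illustration.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

# Shared setup helper written for the first test...
def make_cart_with_items():
    cart = Cart()
    cart.add("book", 12.50)
    cart.add("pen", 1.25)
    return cart

# ...and reused when developing new tests later.
def test_total_sums_prices():
    assert make_cart_with_items().total() == 13.75

def test_items_are_recorded():
    assert len(make_cart_with_items().items) == 2

test_total_sums_prices()
test_items_are_recorded()
```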


The counter argument is, of course, that you don't have time not to.

Many programmers who begin TDD have the same concern as you. Many of them find that they're more productive over the long haul with TDD; it just takes a while to arrive at that realization.


I'm prepared to concede that there may be some long-term benefits to be gained, e.g. in reducing the amount of maintenance work carried out over the lifetime of a project.

However, writing detailed tests up front would completely destroy my "flow" when implementing features in a web application. It would certainly take me more time because there is going to be a large increase in the amount of code to write.

I simply don't have the time available to do this. I get a few weeks to knock out an application and then I'm on to the next thing!


Depends on the project.


Please be specific.

In my experience, even if you are building a quick'n'dirty prototype, you'll always need to fire up a terminal or run the GUI to interact/play with the prototype you are building, to see if it "works".

In this case you may apply what I'd call --this is my very own definition-- a "lightweight TDD approach": just automate the procedures you'd normally perform to test your prototype. Moreover, since you are writing a test for something that does not exist yet, it may help you better define what you want from your experimental code, because you are using your prototype before it even exists :)
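For instance (a hypothetical `slugify` prototype, not anything from the thread), the "fire up a terminal and poke at it" loop can become a short script that exercises the code exactly the way you would by hand:

```python
import re

# Hypothetical prototype: a slug generator we'd otherwise test by
# pasting strings into a REPL.
def slugify(title):
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# The "lightweight TDD" script: the same checks we'd do manually
# in the terminal, automated so they rerun on every change.
checks = {
    "Hello, World!": "hello-world",
    "  On Programming Deadlines  ": "on-programming-deadlines",
}
for raw, expected in checks.items():
    assert slugify(raw) == expected, (raw, slugify(raw))
```

Writing the `checks` table before the function exists forces you to decide what the prototype should actually do, which is the point of the approach described above.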


TDD takes longer. Studies conducted at Microsoft and HP (or possibly IBM) found that development time with TDD was 10-30% longer. That's probably a minimum, and it's largely determined by the team members' level of TDD experience.

The massive upside is that there are far fewer defects in the end product (I think it was 50-80% in the study, but I can't recall exactly).

Depending on the project, the extra time needed might or might not be worth it.

Take a somewhat complex web app used for a marketing campaign, with a lifetime of only two months. In this case it does not make sense to use TDD, as the lower defect rate and "future-proofing" qualities might very well not be worth an increased budget.


Yes. My experience with TDD is that it did save time, but not necessarily during initial development. The long-term effect of fewer defects, combined with the capability for a higher release velocity, is a net savings.

As a counterexample, I'm working with a company that uses TDD for their current development, but whose employees work on a platform with little test coverage. As a consequence of the lacking unit tests, they're only able to release a few versions each year, because their manual testing burden is so high (estimated by one manager as about half the cost of a release). They've found that TDD for their new features doesn't escalate the QA cost as significantly as not doing it does. Bottom line: it's a little more time up front, but they've found it to be a long-term gain.



I'm curious, are you against unit testing in some scenarios or just TDD? I'm sure we can think up some conditions where the cost won't be returned. Is that all you mean?



