Hacker News
Test-Driven Development Bypasses Your Brain (stoneship.org)
55 points by ddfreyne on March 16, 2013 | hide | past | favorite | 65 comments



Personally, I completely disagree with this. I've never found myself randomly changing code in a desperate attempt to get a test to pass.

Maybe it's because I'd been coding for years before I ever tried TDD, but when a test fails, I logically debug the code the same way I would if I wasn't using TDD.

As far as I'm concerned, having tests just flags possible errors much quicker, and also gives me more peace of mind that my code isn't gonna be riddled with hidden bugs.


An often touted "benefit" of TDD is that "addictive" feeling when you write tests and see them pass. "you feel like you have done a lot because you have a lot of code"; "you feel a great deal of accomplishment". Quite a few pages talk about it when you search for "tdd addictive".

The canonical example is the master of XP solving Sudoku in the TDD way: http://xprogramming.com/articles/oksudoku/ (part 1 out of 5) -vs- Peter Norvig: http://norvig.com/sudoku.html


I also disagree with this article. The point of TDD is not to blindly mash at the keyboard to make tests pass; TDD encourages you to go back and refactor code and tests after you get them passing. TDD gives you confidence that the functionality you developed yesterday doesn't break when you add more functionality today.

TDD isn't the be-all and end-all; it's just one more tool in a developer's toolbox that allows us to be better at our jobs. If you solely rely on TDD or [insert newest popular development technique] you are going to have a bad time.

One place I've found TDD to be insanely helpful is exposing the interaction between pieces of the system before building it out in code. I tend to write my tests from the bottom up: I write the assertion first, and then build up the stuff I need to test that assertion. This makes it easier to see what's needed to test the functionality and whether or not the test looks good.

Gary Bernhardt does a really good job of explaining his philosophy on TDD, which I largely agree with: http://destroyallsoftware.com


The other thing to consider is that if you don't know what changed between code change and test running, you aren't running your tests often enough. When you make a change, you test that change. That change should be small enough that it makes it clear where the problem lies.


I believe it boils down to a matter of discipline.

If you say that you never randomly modify code in order to make it work (according to the tests), then that is great, and that is how every developer should work. However, my experience is that people who lack discipline (or face a looming deadline) tend to not take their time to properly reason about code and instead rely on the test suite to tell them whether code is right or not.


The author made one slight mistake: he wrote "there is a tendency to mindlessly modify code" instead of "I have a tendency to mindlessly modify code".

Also, it's not like we haven't seen this kind of behavior decades before the invention of TDD.

This is just another example of a craftsman blaming his tools. TDD is not a silver bullet, but no method or tool can serve as an excuse for mindlessly poking around until it works. This isn't limited to programming either.


If only it were in the past. I've seen this behavior with coworkers: changing random bits of the code without any coherent system to speak of, rerunning the application from scratch and manually testing whether it works now.

I can't describe how shocked I was.


I admit I did fall into that trap once, long ago. I’m very aware of the consequences now.

The reason for writing the article, however, is because I have seen the same mindless behaviour with other people as well.


He also completely overlooked the refactoring part of the equation: write test(s), write code that passes all the tests, then refactor the code until it's shiny enough and still passes the tests.


I sometimes have the same problem the author has: if I know that there are a lot of tests covering a particular piece of code, I tend to be less diligent when making modifications.


I don't recall ever reading that just because you have tests, you should no longer understand the processes by which your code functions. Was this something that they've seen happen, or experienced personally?


I have occasionally been tempted into the mindless code modifying that I described. That is in the past though!

More importantly, I have also seen this happen in professional environments. A very large test suite is very useful, but it is absolutely not a catch-all safety net.


I suspect the linked article is a straw-man built to provoke responses, and thereby create page views for the blog.


I (sadly) average 1 article written per 3 years, so there wouldn’t be much of a point in creating page views.


But you still try. =)

The title contains that "bold statement" to incite a response.

There are numerous fallacies in your article. I believe they stem from a misunderstanding of certain aspects.

> writing code in a test-driven way bypasses your brain and makes you not think properly about what you are doing.

You should not just start writing code blindly. You should have a clear understanding of the problem up front. When you start coding, it should be done after you have a plan.

> Furthermore, true 100% code coverage by tests is a myth: no matter how many good tests you write, not all cases will be covered.

Code coverage measures how much of your code is exercised by tests. It in no way promises that all cases are covered. This is not a deficiency in code coverage, merely a misunderstanding of it.

> Therefore, mindlessly modifying code until all tests pass is likely to introduce new bugs for which no tests exist.

Ignoring the other parts of this that make no sense, I propose that mindlessly modifying code without tests will introduce bugs.

> Algorithms must be understood before being modified, and modifications must be done as if no tests exist at all.

I don't understand this. Of course they must be understood. TDD does not remove this requirement. I'm also not sure how modifications must be done as if no tests exist? Maybe you mean to suggest that optimizations in algorithms must be applied all at once, and cannot be made in small, incremental changes?

> You apply the optimisation, and some tests start failing.

Whereas if you did not have tests, you might not know this.

> But how can you be sure that the algorithm still works? How can you be sure that the mindless modifications did not introduce edge cases that were not tested before?

How can you be sure that your algorithm worked before in all cases? How can you be sure, without testing, that your changes still work?

You really are making a straw-man. You are effectively arguing that TDD doesn't prove something that TDD doesn't promise. In fact, your premise - "no matter which software development methods you use, do not forget to use your brain." - and your title imply clearly that TDD doesn't encourage using your brain.

That's most assuredly not true.

P.S. I hope I don't sound harsh. I'm not trying to belittle or insult you. =)


Yes, it's all a big conspiracy. Maybe there are some aliens involved too.

Why some people always assume bait/trolling when no such thing appears even remotely possible (as in this case, which is a well reasoned and argued post) is beyond me.

You might disagree with the author, but he tries, and provides arguments for what he writes.


I suspect your comment adds absolutely nothing to the conversation. If you disagree with the premise, then how about explaining why you think TDD is the panacea for poor software quality?


The "bold statement" is a little too bold. It goes from:

  |  writing code in a test-driven way bypasses
  | your brain and makes you not think properly
  | about what you are doing.
(Test Driven Development makes you not think properly and bypasses your brain) to:

  | no matter which software development methods
  | you use, do not forget to use your brain
"Just don't mindlessly program."


TDD is good for verifying that your code handles the set of requirements given by the customer - including any edge cases that matter to them. I probably agree that 100% test passes doesn't equal no bugs.

Nonetheless, it's still useful! You can still write TD code and use your brain - it is only slightly easier to be lazy (and specifically, lazy in a way you're not supposed to care about, yet.)

In the end, crash reports from production use will reveal any bugs that matter in the system (if any), and you can write new tests for those extra cases and make the code pass again. Combined with the rest of Agile (sorry,) i.e. fast release cycles and so on, this isn't a road block.


> I probably agree that 100% test passes doesn't equal no bugs.

TDD never promised that, and practitioners of TDD understand that 100% coverage doesn't mean you won't have bugs. This doesn't invalidate TDD or testing (as you are obviously aware =)).


Sure you will still have bugs. The question is whether the reduction in bugs due to TDD outweighs the increased investment in developer and tester time.

Because for those of us who do TDD every day the blowout in time is at minimum 2-3x longer than without it. Not to mention the detrimental impact on build times.

All of that aside, have you noticed how there are no decent metrics available for TDD's effectiveness?


> Because for those of us who do TDD every day the blowout in time is at minimum 2-3x longer than without it.

I do not know your environment, but TDD does not make things take 2-3x longer for almost anyone I know who practices it. This is especially true when you factor in total development time. Most estimates I see place TDD at making the project take 15-30% longer.

> All of that aside. Have you noticed how there are no decent metrics available for TDD's effectiveness?

Not sure what you mean by that. There are metrics you can use (how else could they do studies on this?). It's been proven time and time again in studies (some are linked in these threads here).


> It's been proven time and time again in studies (some are linked in these threads here).

I see that claim a lot, but when I look at the "studies" being cited, they rarely stand up to even cursory scrutiny about their methodology and the choice of test subjects.

These studies (or those making a case for TDD based on them) tend to do things like generalising results based on a few undergraduate students writing toy programs to draw conclusions about professional software developers working on industrial-scale applications, or using no unit testing at all as a control rather than say doing either test-first or test-last unit testing but not following the TDD process.

If you have better sources, please do share them. Developing a non-trivial application using two different techniques is rarely cost-effective, even without controlling for having different developers or experience in the two cases, so getting good quality data about the effectiveness of software engineering processes is hard.


I think the problem mostly stems from the "do the simplest thing that could possibly work"[1] methodology that some practitioners of TDD advocate over thinking about the problem and solving it properly.

[1]http://c2.com/xp/DoTheSimplestThingThatCouldPossiblyWork.htm...


The problem isn't the advice, it's the misunderstanding of that advice. Thinking about a problem should happen, and when you sit down to code, you should already know what needs to happen. TDD doesn't propose to replace planning and thought.


Fair enough, I do admit my experiences with TDD are pretty much limited to writing the game of life several times at a code retreat where thinking too much ahead of time was somewhat verboten and talking with TDD practitioners that suggest the best solution to solving a problem is to write some tests, then take some "baby steps" until the problem is solved. I always get the impression it seems to lead into a somewhat absurd situation, such as the one described in [1].

What do you think would be a good reference with regards to TDD practices, as opposed to "I saw some people do it and it looked seriously wrong?"

[1] http://programmers.stackexchange.com/questions/109990/how-ba...


I've always viewed TDD as a process that works for some people. It's always important to remember that people learn, develop and think differently. If TDD works for you, great. But do not force it upon other people, as it may not work for them.

(This isn't to say that unit tests are bad, but rather writing tests first may not benefit all people)


This sounds a bit like "we don't need no stinking testing", but I know the author is trying to hit at a deeper point. I only wish he had done better.

One of the problems here is language: TDD as a general concept can cover everything from high-level behavioral testing to a method-by-method way to design your program. There's a big difference between those two!

In general, of course, programming is balancing what the program is supposed to do with how the program is constructed. That's true whether you have TDD in the mix or not.


Good luck doing TDD with behavioral tests. Running (eg.) Selenium tests repeatedly is only going to slow you down.


Tell me about it! I am in a project right now where we are required to run BDD tests (Cucumber) which hit real servers (no, mocks won't do). Worst of all, the 3rd-party "ESB" we are using takes forever to shut down and start up... and is restarted a lot in the tests (someone else is doing the BDD tests as "acceptance criteria").

The result? Running the complete BDD test suite takes about 5 hours, and it must be run for every commit.


This doesn't sound very brain or productivity healthy.


I'm inclined to agree that it is hard to create an algorithm using TDD (for example Dijkstra's algorithm). But "the example" mentioned in the post is not grounded. It would be nice if someone had a real-world example to back up this claim; otherwise it is very easy to argue that the author is not applying TDD correctly.


I find TDD to be useful in two cases:

1. When I already know what I'm doing and it's just a matter of coding what's already in my mind

2. When I'm writing in a dynamically typed language, it forces me not to be lazy and to have adequate test coverage, since I don't have compile-time type safety

I do less of TDD when dealing with a statically typed language and/or when I'm working in an exploratory mode. TDD doesn't help me when I'm just trying out different things to get going.
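In Python terms, the kind of check a compiler would give you for free has to come from a test instead. A minimal sketch, using a made-up `parse_port` helper (not from the thread):

```python
def parse_port(value):
    """Convert a string port to a validated int; nothing checks this before runtime."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

# With no static types, these tests are the earliest warning of misuse:
assert parse_port("8080") == 8080
try:
    parse_port("99999")
except ValueError:
    pass
else:
    raise AssertionError("expected ValueError for out-of-range port")
```
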

The thing that pisses me off is when people don't realize that EVERY technique has caveats and try to promote it as a golden rule - a lot of "agile" consultants preach TDD as the holy grail for writing code without any bugs.

EDIT: grammar


  1. When I already know what I'm doing and it's just a matter of coding what's already in my mind
A concept often used in TDD is spiking. If you don't know what you're doing, do a quick and dirty untested version until you do know what you're doing. Throw that code away and TDD it with your new found knowledge.


http://www.dalkescientific.com/writings/diary/archive/2009/1... is a much better article about the problems with TDD.


Hacking code to fix problems isn't unique to TDD. I see people do it all the time to codebases that don't have tests.

If your goal is to fix this behavior, go for the root causes. TDD isn't a root cause for this particular problem.


I've been mixing in TDD and BDD for the last 1.5 years of my 11-year coding career. I can't think of any reason not to test except for laziness and someone's unwillingness to truly use their brain to evaluate its value.

Contrary to this article, one great reason is that TDD/BDD allows me to make refactors and major changes and know whether or not I broke something. I find the opinion of this article passé.

A perfect example for TDD/BDD is a complex REST API with dozens of endpoints where you're refactoring a piece of the authentication system. How do I know if I broke something or introduced a bug?

My experience is that most developers do not test and this is exactly the kind of way complex bugs get introduced. You actually make the job more difficult on yourself because instead of knowing YOU broke something, a bug gets introduced and you spend more time tracing the cause. I have worked at many places that have this obnoxious cycle of deploying, breaking, deploying, breaking.

It is irritating to see articles like this pop up because it's not like it's a school of thought or a religion. It's a purposeful tool that can and will save you time and effort and probably impose a few good design practices along the way. I'm not saying shoot for 100% coverage, fuck, I'm happy just knowing a few complex pieces are working. And I don't always think it's a good idea to design APIs from the tests, especially when you are experimenting and researching.


Your "perfect example for TDD/BDD" is actually about testing in general, not TDD. You are stating the value of having a test suite when making a large change, not the value of writing tests first.


Sure. I guess I forgot to also make the point that the best way to write tests is to do it as you write the code you are testing. Otherwise the tasks becomes somewhat tedious and intolerable.


I think this is a more general problem in programming, namely "Programming by Coincidence" [1]. Some people just try to solve the problem without actually thinking about it; they just try to match the output specification.

[1] http://pragprog.com/the-pragmatic-programmer/extracts/coinci...



This article misunderstands TDD completely. In TDD, the tests are your specifications. Therefore, any code that passes the tests is formally correct - even though it should always be minimal (YAGNI).

In fact, TDD is not simply "tests first". It is: write ONE test, make it pass with the MINIMUM amount of code, refactor, loop.
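That loop might look like the following sketch in Python (a made-up `slugify` example, not from the article):

```python
# 1. RED: write ONE failing test before any implementation exists.
def test_replaces_spaces():
    assert slugify("hello world") == "hello-world"

# 2. GREEN: the MINIMUM code that makes it pass.
def slugify(text):
    return text.replace(" ", "-")

# 3. LOOP: the next test drives the next minimal change...
def test_lowercases():
    assert slugify("Hello World") == "hello-world"

# ...and the implementation grows only as far as the tests demand
# (refactoring between iterations as needed).
def slugify(text):
    return text.lower().replace(" ", "-")

test_replaces_spaces()
test_lowercases()
```
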


Usually this makes people go for very simple solutions without thinking properly about the right data structures and algorithms for the problem at hand.

I'd rather write properly designed code and write the tests afterwards, before delivering the code.


True; however, the solution is OK, since passing the tests is the only quality you need.

If not, write a new test, make it pass. The naive implementation can be substituted with a different one easily since the tests guarantee correctness.

Generally though, since the "third leg" of TDD is refactoring, this ensures that the proper structures are put in place as soon as they are actually needed.


Have you ever tried to apply that in a big enterprise?


No one is disputing that the code is formally correct. The problem is that the code is generally focused on those specific tests and those tests alone. Meaning the code hasn't been designed or architected with a broader context in mind.

Hence over time the codebase becomes this huge tangled mess of "formally correct solutions".


FTA: Algorithms must be understood before being modified...

I would add to this that algorithms must be understood before being tested, something with which I suspect most TDD proponents would agree, and which would dispense with the need for the rest of the article.


Could we please stop arguing? This back-and-forth with absolutes is akin to useless political campaigning. http://blog.8thlight.com/uncle-bob/2013/03/06/ThePragmaticsO...

(More specifically, read everything from the "The Pragmatics: So when do I not practice TDD?")


I agree -- I've found myself in that exact case that he described (mindlessly adding and subtracting one on various loop indices until it worked) more than once.
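Pinning the boundaries down in assertions beats nudging indices until the run looks right. A minimal sketch, with a made-up `window_sums` helper where the classic off-by-one is writing `range(len(xs) - k)` instead of `range(len(xs) - k + 1)`:

```python
def window_sums(xs, k):
    """Sum of each length-k sliding window over xs."""
    # The tempting off-by-one is range(len(xs) - k), which drops the last window.
    return [sum(xs[i:i + k]) for i in range(len(xs) - k + 1)]

# Boundary assertions make the index arithmetic explicit instead of guessed:
assert window_sums([1, 2, 3, 4], 2) == [3, 5, 7]  # overlapping interior windows
assert window_sums([1, 2, 3, 4], 4) == [10]       # the single full-width window
```
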


The same argument would apply to a good compiler. And that is exactly how I think about tests -- kind of a way of extending the compiler.


Dijkstra's quote reminds me of Knuth's "Beware of bugs in the above code; I have only proved it correct, not tried it."


If you're coding mindlessly doesn't that by definition mean you've bypassed your brain?


TDD in theory is a great idea. In practice it is dreadful.

Because what has happened is that the obsession with code coverage means developers create a whole raft of tests that serve no real purpose, which then gets translated into an unworkable, unwieldy, spaghetti-like mess of code. Throw in IoC and UI testing (e.g. Cucumber) and very quickly the simplest feature takes 5x as long to develop and is borderline unmaintainable.

It just seems like there needs to be a better way to do this.


The thing about practice is that it takes practice. Here are the issues you raised:

- Focus on code coverage

- Testing nothing

- Spaghetti code from doing TDD

- IoC + UI/Cucumber tests take a long time to write and run

I would have to say I agree, there is a better way to do this. My guess from your last statement is that you are relatively new to software. Don't mistake your team's poor practices for the practices not working. Try to promote better practices.

Tell your team that code coverage only informs you of what isn't being tested. It doesn't help with quality.

Tests, like code, should be deleted if they don't do anything. Strictly adhere to YAGNI.

If TDD is producing spaghetti code, you are doing something very wrong in your tests. The tests should be short and focused, just like your code base. Such tests are hard to write on a messy code base, which forces you to refactor, which leads to clean code. Maybe read up on the SOLID principles and other code-quality practices to see what you are missing. Refactoring techniques can be very helpful too. This takes years to get good at.

Cucumber is over used. Read about the testing triangle (http://jonkruger.com/blog/2010/02/08/the-automated-testing-t...). My guess is that your team is focusing on the top levels. Those tests provide little long term value, fail without good explanations and can be complicated to write and maintain.


Unit tests are still useful sometimes. Everyone, when they first start out, goes overboard with how many tests they write, and can't tell the difference between what should and shouldn't be tested. The first couple of projects that are unit tested for a developer tend to have so many tests that are brittle that it slows the entire process down.

What I do now is this: I'm going to actually test stuff while I'm coding anyway, right? Regardless of whether I'm doing TDD or not. Unit tests give me a useful harness where I can write those tests, instead of hundreds of Console.WriteLines. It's basically not much more effort than Console.WriteLine() style "testing", except you are left with some reusable artifacts at the end that may come in handy later on.
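The shift from throwaway print statements to a reusable harness is small. In Python terms (with a made-up `normalize_whitespace` helper as the thing under development):

```python
def normalize_whitespace(s):
    """Collapse runs of whitespace to single spaces and trim the ends."""
    return " ".join(s.split())

# Throwaway version: print(normalize_whitespace("  a   b ")) and eyeball it.
# Reusable version: the same check, kept as an artifact for later runs.
def test_normalize_whitespace():
    assert normalize_whitespace("  a   b ") == "a b"
    assert normalize_whitespace("one\t\ntwo") == "one two"

test_normalize_whitespace()
```
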


You're supposed to refactor your code. Layering tests on top of tests means that you end up spending more time maintaining tests than writing code.

Also, don't test stuff that isn't going to break, and avoid writing system and UI tests unless you absolutely have to.


People always ignore the refactoring. They also look at the time TDD adds merely in terms of individual sessions. They don't look at the project as a whole.

> Also, don't test stuff that isn't going to break

Hah! =) But how will I prove that i++; is actually incrementing i!


Well, if the incrementing is part of a method somewhere, it probably should be tested, but as part of testing the method, not the ++ itself.

Usually it's more like people testing the basic mechanisms of frameworks or libraries. In some cases it makes sense, eg. if a library that you're depending on is a bit dodgy, but usually it's just a waste of time.
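In other words, the increment gets covered indirectly, through the behaviour of the method that contains it. A sketch with a hypothetical `RetryPolicy` class (not from the thread):

```python
class RetryPolicy:
    """Hypothetical example: the 'i++' lives inside record_failure()."""

    def __init__(self, max_attempts=3):
        self.attempts = 0
        self.max_attempts = max_attempts

    def record_failure(self):
        self.attempts += 1  # the increment itself needs no dedicated test...

    def should_retry(self):
        return self.attempts < self.max_attempts

# ...but the behaviour it drives does:
policy = RetryPolicy(max_attempts=2)
policy.record_failure()
assert policy.should_retry()      # one failure: keep retrying
policy.record_failure()
assert not policy.should_retry()  # limit reached: stop
```
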


The problem is that right now in the software industry:

TDD, Agile, Scrum, XP etc are a religion.

And a lot of people have managed to make their lives easier by making the teachings of this religion mandatory. So what I've been witnessing the last few years is that saying "no, I don't think we need a test for this" is a position that will get you nowhere. So instead everyone just puts up with longer and longer build times and spends more time each day fixing broken tests.


That's an overly cynical take - I've seen TDD, Agile and Scrum all work really well. In any case, refactoring is part of the religion too, so it shouldn't be too hard a sell for you.

And it'll fix build times and broken tests! :)


Come work in the enterprise world, where no one cares about whatever the cool kids are proposing.

If the customer does not request unit tests in the contract, usually no program manager forces people to waste time writing them.


I would argue that this type of discourse is not useful. There are people putting effort into properly assessing the usefulness of TDD. Personal anecdotes are of little value. See http://blinkingcaret.wordpress.com/2012/10/02/tdd-bdd-add-ev...


It sounds like you've had some bad experiences. I'm not sure that you could attribute unwieldy spaghetti code to the use of TDD though. Do you believe the projects you've worked on with TDD would have been in better shape without TDD?


Moral of the story: Coding mindlessly can be almost as bad as blogging mindlessly.


Fully agree!



