>in reality making code more testable probably mostly amounts to breaking it down into simpler, functionally pure bits.
I know. The problem is:
* This process often introduces bugs. How are you going to catch those bugs? Not with your tests: you're changing this code precisely so you can write tests. It's a catch-22.
* Sometimes people do this only to discover that the simpler, "functionally pure" code is pointless to test because it's so trivially simple. Somebody literally did that today on the code base I work on. The code as a whole still has bugs, but those tests won't ever catch one; they'll just break when the code changes. Plus, that "refactoring" probably introduced bugs. I think this is what the concept of "unit test induced design damage" was getting at (see the sketch after this list).
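To make that second point concrete, here's a minimal sketch of the pattern in Python. The names and values are invented for illustration, not taken from the code base I mentioned:

```python
# The extracted "pure" helper: so trivial there's nothing to get wrong.
def apply_discount(price: float, rate: float) -> float:
    return price * (1 - rate)

# The test it attracts: it just restates the implementation.
def test_apply_discount():
    assert apply_discount(100.0, 0.25) == 75.0

# Meanwhile the actual bugs live in the orchestration this was pulled
# out of: which rate applies, when, and to which line items. This test
# can't catch any of that; it can only break when apply_discount is
# renamed or reshaped during the next refactor.
```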
This isn't a problem with tests as a whole. Or TDD. It is partly a problem with people who use the terms "unit test" and "test" interchangeably (this engenders entirely the wrong kind of thinking). It's mostly a problem with unit testing as a concept (i.e. not the specific frameworks themselves).
Having nice, clean code interfaces is also often conflated with unit testing; this is a mistake. One does not necessarily lead to the other.
Michael Feathers' "Working Effectively with Legacy Code" goes into some detail on this. When you have bad, untested code, you don't start with refactoring. You start by writing high-level "characterisation tests", then you refactor. After that you can still write unit tests for the better components.
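For anyone who hasn't read it: a characterisation test doesn't assert what the code *should* do, it pins down what it *currently* does, so you have a safety net while you refactor underneath it. A minimal sketch, assuming a hypothetical legacy module (`legacy_billing` and the recorded values are invented):

```python
from legacy_billing import calculate_invoice  # hypothetical legacy module

def test_characterise_calculate_invoice():
    # These values are whatever the existing system actually returned
    # the first time we ran it, recorded verbatim. They're treated as
    # the spec until we deliberately decide to change behaviour.
    assert calculate_invoice(customer_id=42, month="2023-07") == 1317.50
    assert calculate_invoice(customer_id=42, month="2023-08") == 0.0  # yes, really
```

Note the intent: the recorded values aren't "correct", they're just current, which is exactly what makes them safe to refactor against.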
I've followed this process a few times and then noticed that the last step didn't really add much value.
It felt good at the time because that's what I was "supposed" to do. I'd achieved the fabled "testable code Nirvana" and... meh.
The first step was life-changing (or at least career-changing), though. Bringing a piece-of-shit code base under control with integration tests was a process that blew my mind.
That's what led me to start questioning the efficacy of jamming architectural changes into code in order to sacrifice at the altar of the unit testing gods, and to wonder whether, just maybe, unit tests' steep demands and limited value mean that they suck.