Testing is all well and good, but I worry that software is falling into the trap that manufacturing learned to avoid decades ago: you can't test quality into the product.
Most blog posts about software quality seem to focus on testing. While testing is certainly a component of a good development process, it's really better to avoid injecting a defect in the first place than to rely on tests to find it later.
Can't agree more with this sentiment. Current software engineering practice has moved from focusing on the software itself to testing the software. Even peer reviews seem to be taking a back seat these days to 'make it pass the tests' engineering.
1) Your tests are just as likely to have bugs anyway.
2) The 'bad' bugs are logic bugs - if your logic is wrong, your tests are wrong, so you are testing the wrong thing.
3) Users often do things you didn't plan for. If you didn't plan for it, you didn't create a test for it. 100% code coverage does not mean you have tested every possible input, only that you've tested every possible branch. You can still go down the wrong branch or get the wrong result (see the sketch below).
4) If you design first and test later, you're more likely to have designed real logic into your system. TDD will let you refactor more easily; DDD (design-driven development) will make it so you don't need to spend months refactoring in the first place.
Basically: software engineering in general needs more architecture and less box-ticking. We are building space elevators here, not chair legs. There is no need to reinvent the chair leg, "but this time with more tests".
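To make the coverage point concrete, here's a minimal sketch (hypothetical isLeapYear, invented for this comment) where two tests give 100% branch coverage and a bug still ships:

    // Buggy: century years like 1900 and 2100 are not leap years,
    // but this function says they are.
    function isLeapYear(year: number): boolean {
      if (year % 4 === 0) {
        return true; // wrong for 1900, 2100, ...
      }
      return false;
    }

    // These two tests exercise both branches -- 100% branch coverage --
    // yet isLeapYear(1900) still gives the wrong answer.
    console.assert(isLeapYear(2004) === true);
    console.assert(isLeapYear(2003) === false);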
I totally agree that tests aren't a replacement for thoughtful design, but neither is thoughtful design a replacement for tests. I think they complement each other. Thoughtful design allows you to model how you expect the software to work and your tests help you validate that model.
> 1) Your tests are just as likely to have bugs anyway.
This is partially addressed by the "red-green" part of TDD's process. I catch many bugs in my unit tests by ensuring that the tests start out red and become green only after writing the code that's supposed to implement what's being tested. Yes, it's possible that some bugs will still exist in your test code, but I think it significantly reduces the probability that a bug exists in your application code.
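For a concrete (made-up) illustration of red-green with a hypothetical median function:

    // Step 1 (red): write the test first and run it before median exists
    // (or against a stub). Watching it fail proves the test can fail.
    function testMedian(): void {
      console.assert(median([3, 1, 2]) === 2, "odd-length list");
      console.assert(median([1, 2, 3, 4]) === 2.5, "even-length list");
    }

    // Step 2 (green): now implement until testMedian passes.
    function median(xs: number[]): number {
      const s = [...xs].sort((a, b) => a - b);
      const mid = Math.floor(s.length / 2);
      return s.length % 2 === 1 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
    }

    testMedian(); // green only after step 2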
> 4) If you design first and test later, you're more likely to have designed real logic into your system. TDD will let you refactor more easily; DDD (design-driven development) will make it so you don't need to spend months refactoring in the first place.
I agree with the first part, that putting some thought into design upfront is likely to improve the overall quality of your software, but I don't see how that necessarily implies "test last." I like Domain Driven Design [1] (another DDD) for upfront design, but I still use TDD when implementing the design. I've not heard of Design Driven Development before. Is that another name for Domain Driven Design? Its Wikipedia page [2] is lacking substance.
A word about refactoring: I too find that putting thought into design before implementation reduces the need for large refactorings later. I agree with you there. However, I've never found that TDD helps me refactor "later." With TDD, I employ many micro-refactors during implementation. In other words, DDD provides the outline, TDD fills in the details. Alternatively: DDD is designing in-the-large, while TDD is designing in-the-small.
You and nikita below are both right that having tests is much better than not having tests, and the argument isn't really about choosing design or tests - you definitely want both.
I was just trying to address the same point that HeyLaughingBoy above was addressing - tests are not a replacement for design; quality is not about tests. Tests are a tool to improve productivity in a large code base across many developers, and they're also a useful tool for a single programmer to use to focus their development. However, tests themselves are not a mark of quality, and any actual bugs that tests catch are just nice extras - and if you follow TDD, they will never catch bugs, since you will code until the test passes, bugs and all.
So my point is very simple: quality is in design and implementation. Tests are just for increasing engineering efficiency - you can have the same quality without tests as long as you have enough time (manual QA). You can't have the same quality without design, even with all the tests in the world.
> Quality is in design and implementation. Tests are just for increasing engineering efficiency.
Yes, precisely. I couldn't have said it better myself. Your original comment came across as discounting the importance of tests a lot more than I think you meant it to.
But I was also trying to make a couple points about TDD.
The first is that watching the tests fail before writing the implementation increases your confidence that there isn't a bug in your test code. This was just to address your #1 bullet point. But this only treats tests as a verification tool, which is orthogonal to good design. Since we agree that tests are for efficiency and we're discussing design, it's not worth dwelling on this point.
The second point is that TDD is a useful design tool that can be used in conjunction with upfront design. Upfront design is great for having an overall coherent design, but there are many finer details that are inevitably discovered at implementation time. That's where I think TDD really shines.
If you think of building software as sculpting, upfront design is like chiseling out the shape of your statue and TDD is like switching to a more precise tool to etch in small details like hair texture or wrinkles. It's not a perfect metaphor. For example, if you start with a smaller chisel (TDD), you'll end up with the same statue; it will just take longer. But if you do the same in software development, you'll end up with an arm coming out of the statue's thigh.
> I totally agree that tests aren't a replacement for thoughtful design, but neither is thoughtful design a replacement for tests.
Thoughtful design can greatly reduce the number of tests you need by reducing the complexity of your software and moving things to compiler-verified declarations. It can also prevent bugs by the sheer virtue of readability.
This is not emphasized in the industry nearly as strongly as it needs to be.
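As a small illustration of moving a rule into a compiler-verified declaration (hypothetical Connection type, invented for this comment):

    // Instead of { connected: boolean; socket?: string } -- where only a
    // test can catch code touching the socket while disconnected -- encode
    // the rule in the type and let the compiler enforce it everywhere.
    type Connection =
      | { state: "disconnected" }
      | { state: "connected"; socket: string }; // string stands in for a real handle

    function send(conn: Connection, data: string): void {
      if (conn.state === "connected") {
        console.log(`sending ${data} over ${conn.socket}`);
      }
      // Referencing conn.socket outside this branch is a compile error,
      // so a whole class of "used while disconnected" bugs needs no test.
    }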
I definitely wouldn't argue with the idea that we have to think about what we build, design carefully, etc. However, imagine a world with massive test coverage that provides instant regression testing, and a world without it. Which would you choose?
We do have thorough peer reviews; that's what Phabricator is for, and nothing gets in without passing code review.
Some more comments.
1. True, but you have to hit a combination of a test bug and a feature bug for the feature bug to slip through.
2. Also true, and we've had to deal with those types of bugs. When the area is tricky (transaction log rotation, quorums, etc.) we sometimes go to the extent of proving things formally.
3. Very true, and scenario-based testing will only get you so far. Receiving customer feedback is critical for that. Obviously SaaS has an advantage here.
4. Everything is a tradeoff. We definitely design/prototype hard features before we build them.
I think the right way to put it is: "Once you place yourself on the scale between chair legs and a spaceship, you choose how to run the engineering process."
I'm not aware of any credible engineering effort that only does that.
> 1) Your tests are just as likely to have bugs anyway.
Yes, but only with P(test_bug). We see code bugs with P(code_bug), and if there are tests covering all those pieces of code, the final probability of bad things shipping to production is P(test_bug) * P(code_bug).
Obviously, it's an overly optimistic assumption, but tests serve as one additional layer in a "defense in depth".
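To put illustrative numbers on it: if P(code_bug) = 0.05 and P(test_bug) = 0.2, a bug ships with probability 0.05 * 0.2 = 0.01, a 5x improvement over the untested 0.05, even with quite unreliable tests. (Numbers invented, and the independence assumption is doing a lot of work here.)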
> 2) The 'bad' bugs are logic bugs - if your logic is wrong, your tests are wrong, so you are testing the wrong thing.
If your logic is wrong, your architecture is likely to be wrong, too. If your basic assumptions are wrong, your code will be too, no matter what.
> 3) Users often do things you didn't plan for.
Yes, and architecture isn't going to save you from that either. (It helps, though. Planning to minimize state is a good idea :)
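A tiny made-up example of what "minimize state" buys you:

    // Stateful version: every call order users invent is a new case to test.
    class RunningTotal {
      private total = 0;
      add(x: number): void { this.total += x; }
      get(): number { return this.total; }
    }

    // Stateless version: a pure function of its input, with no hidden
    // history for an unplanned usage pattern to corrupt.
    const total = (xs: number[]): number => xs.reduce((a, b) => a + b, 0);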
> 4) [...] will make it so you don't need to spend months refactoring in the first place.
Which is a great strategy - if you can assume your requirements never change. If you work in a field like that, excellent. Most of us, however, need to deal with a rapidly changing field.
> software engineering in general needs more architecture
Good software engineering has architecture. Tests augment it. How much architecture you create up front, how much you create on the fly, and how often it changes depend entirely on the field you work in and how many resources you can spend on an issue.
Agree. IMO the main benefit/purpose of tests is to ensure you don't unintentionally break something later, when making changes elsewhere. They're there to help you realize that something unexpected has changed.
Secondarily, they can help you verify the correctness of things that might otherwise go unnoticed (and produce negative effects elsewhere). But it's less useful to spend a lot of time/effort writing tests that verify things that are obvious anyway.
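For example (a made-up toCents helper), a test that simply pins current behavior:

    // A characterization test: it pins today's behavior so that a change
    // made elsewhere (say, swapping Math.round for Math.trunc while
    // refactoring) fails loudly instead of silently shifting totals.
    function toCents(amount: number): number {
      return Math.round(amount * 100);
    }

    console.assert(toCents(19.99) === 1999);
    console.assert(toCents(0.1 + 0.2) === 30); // floating-point edge pinned on purpose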
I think this is easier said than done. What I believe you're proposing is difficult to enforce. Can you give some examples -- sounds like you're mapping this to the manufacturing domain.
In a nutshell, this is what we do. It's only difficult to enforce if the team doesn't buy into it.
Dev takes a feature request off the list. Reads & understands the Requirement for the feature. If there's no formal Requirement and the feature is complex enough to need one, he meets with the stakeholder and creates one, then meets with the stakeholder and the assigned tester again to review that the Requirement as written is correct, testable, and has its edge cases taken care of.
Dev now designs the feature. A simple feature doesn't require any formal documentation, but we encourage at least a hallway discussion with someone so the design passes the smell test. Bigger/more important features need a written design description and a formal review of the design.
Dev implements the feature, desk-checking his code along the way for logic errors that the compiler or unit tests might miss, and when satisfied, unit tests it.
Code review is held. Possible discussion with tester about aspects the dev may have had difficulty testing, so the tester can focus on those areas. Tester may also need visibility into some hidden aspects (perhaps he wants to know the partial result of a calculation), so dev may have to modify code to log or print that somewhere so it can be verified.
Tester creates test cases, reviews with developer to be sure they make sense and are testing things related to the feature, regression areas, etc.
Tester runs test cases. If they all pass, feature is closed, test documentation is recorded.
Yes, it seems like a lot, but all the reviews and checks mean that fewer bugs get to the point where a tester (or worse, a customer!) will find them. Testing becomes a way to verify that the software is working correctly instead of the way to find the bugs.
Most bugs are due to misunderstanding a customer requirement, so we focus on finding those misunderstandings or incomplete design problems before they turn into running code.
Agreed. If quality is critical, look to embedded systems (most OSs, for example, ship with known bugs). You don't send the Mars Rover to Mars with known bugs in the software. Same for heart monitors. And that means waterfall, lots of modelling, simulation, and huge design up front.
Big Design Up Front is the way to go, but with caveats: only design as much as you need to, with the things you know are true or expect to happen. Requirements will change, no matter what, so you have to be flexible.
Yes, it does mean simulation and modeling. Mainly to verify/disprove assumptions and learn how your system will work as early as possible.
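A toy sketch of that kind of early check (hypothetical retry model, numbers invented):

    // Quick Monte Carlo check of a design assumption: "with 1% message
    // loss, 3 send attempts make delivery effectively certain." Cheap to
    // run before the design hardens.
    function delivered(lossRate: number, attempts: number): boolean {
      for (let i = 0; i < attempts; i++) {
        if (Math.random() >= lossRate) return true;
      }
      return false;
    }

    let failures = 0;
    const trials = 1_000_000;
    for (let i = 0; i < trials; i++) {
      if (!delivered(0.01, 3)) failures++;
    }
    console.log(`failure rate: ${failures / trials}`); // expect ~0.01^3 = 1e-6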
I absolutely hate it when I'm reading an interesting article on a startup/company blog and I go to click on their logo/name in the top left corner and it...takes me back to their blog instead of their homepage!
Test (and Design) are part of the development strategy. Depending on what your business goals are, you should optimize Design/Development/Test for them. Sometimes these goals are principles that you want to brand around (e.g. robustness for a DB server product), or tactical decisions that adapt to a stage (catching up, learning, leapfrogging, supportability, locking in a market). These are scenarios that I've seen (and some of which I've worked on) over the last few years:
* Deliver a strong v1 too late
* Deliver a small v1 that opens up a new market and allows competitors to catch up
* Develop a vnext backcompat-focused product and be surpassed by a new/existing competitor
* Develop a heavily customer-focused product and get trapped in impossible/contradictory requirements
* Deliver a weak v1 and lose market credibility
* Deliver a secret v1 and have no real market for it
If you have a clear view of the business goals, Test becomes less of an opinion and more of a resourcing puzzle (still hard, though). It's a great world of tradeoffs between layering tests, testability features, customer research/partnership/simulation, fast development, legacy ownership, responsibility distribution, exhaustiveness, documentation, community/customer engagement, reusability of tests, and reusability of product code.
I would say it was an initial push of 1.5 months from one engineer, plus 10% of one engineer's time for maintenance. As the load increases we have to invest a few days here and there to scale the system.
I have a friend who can derive no value from unit tests. He works on a thing that--through no fault of his own, but by the very nature of physical reality--cannot be automatically tested.