
Recently, I've been thinking about a process I call "Test Coverage Driven Testing". It is similar to TDD, but better suited to the case where we write the tests after the code (you know you do too, at least occasionally, don't lie).

It goes roughly like this:

- write one integration test for the "happy path".

- using some test coverage report, identify untested cases.

- write unit tests for those.

I find it helps me strike a good balance between the time invested writing tests and the benefits reaped from those tests.
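To make it concrete, here is a minimal sketch in Python with pytest and coverage.py (via pytest-cov); the module and function names are made up for illustration:

    # Step 1: one integration test exercising the happy path end to end.
    # `myapp`, `Inventory`, `create_order` and `OutOfStockError` are hypothetical.
    import pytest
    from myapp.inventory import Inventory
    from myapp.orders import create_order, OutOfStockError

    def test_create_order_happy_path():
        inventory = Inventory({"widget": 10})
        order = create_order(inventory, item="widget", quantity=2)
        assert order.total > 0
        assert inventory.stock("widget") == 8

    # Step 2: run the suite under coverage and look at what was never executed:
    #     pytest --cov=myapp --cov-report=term-missing
    # Suppose the report shows the out-of-stock branch was never hit.

    # Step 3: write a focused unit test for that case.
    def test_create_order_out_of_stock():
        inventory = Inventory({"widget": 0})
        with pytest.raises(OutOfStockError):
            create_order(inventory, item="widget", quantity=1)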

Do you have a similar process?




> - using some test coverage report, identify untested cases.

From what I understand, that is not reliable; a line of code can be “covered” – i.e. executed – but still not be tested under all circumstances. If you have pre-existing code you need to write tests for, what you need is probably a tool for mutation testing.
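A contrived Python example of that gap (names are mine, not the parent's): both branches below are executed by the test, so line coverage reports the function as fully covered, yet a mutation to the member branch would survive. A mutation testing tool such as mutmut (run with `mutmut run`) should flag this by mutating the source and re-running the suite.

    def apply_discount(price, is_member):
        # Members get 10% off.
        if is_member:
            return price * 0.9
        return price

    def test_apply_discount():
        # Both branches execute, so line coverage is 100%...
        apply_discount(100, is_member=True)
        assert apply_discount(100, is_member=False) == 100
        # ...but the member branch's return value is never asserted,
        # so a mutant that changes 0.9 to 0.99 still passes this test.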


You're right, it is not a reliable approach, since merely calling a function does not mean I have tested all of its edge cases. And if my code depends on a third-party lib, I might not even have access to that lib's source.

OTOH, aiming for 100% reliability and coverage is way too expensive for most business apps. This is not like embedded software for a plane, where lives are at risk. I usually aim for 80-90% coverage of my own code, plus a regression test for each bug actually reported.
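Roughly how that policy could look with pytest and pytest-cov (the bug, module and function below are invented for the sake of the example):

    # Enforce the coverage floor in CI, e.g.:
    #     pytest --cov=myapp --cov-fail-under=85
    from myapp.pricing import parse_price  # hypothetical module

    def test_parse_price_accepts_comma_decimal_separator():
        # Regression test for a reported bug: European-formatted prices
        # ("12,50") used to raise instead of parsing.
        assert parse_price("12,50") == 12.50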

And by the way, if you really need zero defects (planes, trains, cars, etc.), TDD is not enough anyway.



