
I view tests as experiments: they compare our theory of how the code should work, against the reality of how our code actually works.

Writing tests only for the "happy path" is a form of confirmation bias: we go looking for results that reinforce what we already think. Good experiments should challenge our assumptions.
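
For instance, here's a minimal sketch (pytest style) with a made-up `parse_age` helper: the first test only confirms what we already expect, while the second deliberately probes inputs we'd rather not think about.

    import pytest

    def parse_age(s):
        # Hypothetical helper: parse a user-supplied age string.
        n = int(s.strip())
        if n < 0:
            raise ValueError("age cannot be negative")
        return n

    def test_happy_path():
        # Confirmation bias: we already expect this to pass.
        assert parse_age("42") == 42

    def test_challenging_inputs():
        # These try to falsify our assumptions about the input.
        with pytest.raises(ValueError):
            parse_age("-1")
        with pytest.raises(ValueError):
            parse_age("not a number")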

One good time to write tests is when we're debugging: the bug itself falsifies our theory, so we can capture it as a test (AKA a regression test). This requires that we can reliably reproduce the bug, but that's usually an important first step when debugging anyway.
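
As a minimal sketch (the `calculate_total` function and the empty-order bug are made up for illustration), the regression test just pins down the reproduced failure:

    def calculate_total(items):
        # Fixed version: an earlier version did items[0] + sum(items[1:])
        # and crashed on an empty list.
        return sum(items)

    def test_empty_order_regression():
        # Reproduces the original failing case; if the old indexing logic
        # ever comes back, this test fails again.
        assert calculate_total([]) == 0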

Once we've got our regression test, we might wonder how this situation arose. This is a helpful way to tease out the assumptions we're making about the code, and turn them into tests. For example, the buggy result might be calculated from some intermediate values 'foo' and 'bar', yet the bug seems impossible, because we assume 'foo' and 'bar' always satisfy certain properties that would prevent it. Well, there are two new tests we can write! We can keep working back like this until we find the cause of the bug and fix it. I like this method because we end up with tests that correspond to the features of the code we found ourselves doubting. That's usually a good sign that those tests are worth having.
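
A sketch of those "two new tests", with hypothetical `compute_foo` and `compute_bar` standing in for the real intermediates, and invented properties standing in for the assumptions we actually doubted:

    def compute_foo(x):
        # Stand-in intermediate value; we assume it's always positive.
        return abs(x) + 1

    def compute_bar(x):
        # Stand-in intermediate value; we assume it never exceeds foo squared.
        return x * x

    def test_foo_is_always_positive():
        for x in range(-100, 101):
            assert compute_foo(x) > 0

    def test_bar_never_exceeds_foo_squared():
        for x in range(-100, 101):
            assert compute_bar(x) <= compute_foo(x) ** 2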
