
> [T]he process of writing the inner part of the table test seems to nudge you to think about [what] all the permutations and combinations of what you're testing can be. It leads to good tests and helps you reason about your inputs.

I think this is not inherent. If you need to test all permutations and combinations, do exhaustive or property-based testing instead. Table tests (ideally) represent a pruned tree of possibilities that makes intuitive sense to humans, but that pruning itself doesn't always fit neatly into tables. But otherwise I generally agree.

When I write a specific set of tests I first start with representative positive cases, which I think you would call storybook tests (because they serve both as tests and as examples). Then I map out edge and negative cases, but with some knowledge of the internal workings in order to prune them. For example, if some function `f(x, y)` requires both `x` and `y` to be a square number, I can reasonably expect that this check happens before anything else, so once I'm confident the check itself is correct, I only need to feed square numbers to the remaining cases. And then I optionally write more expensive tests to fill any remaining gaps: exhaustive testing, randomized testing, property testing and so on.
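
To make that concrete, here is a minimal Go table-test sketch of that ordering, built around the hypothetical `f(x, y)` above. The function, the `isSquare` helper and the case names are all illustrative stand-ins, not taken from the article:

    package mypkg

    import (
        "errors"
        "testing"
    )

    // f is a stand-in for the hypothetical function in this comment.
    func f(x, y int) (int, error) {
        if !isSquare(x) || !isSquare(y) {
            return 0, errors.New("arguments must be perfect squares")
        }
        return x + y, nil // placeholder behaviour
    }

    func isSquare(n int) bool {
        for i := 0; i*i <= n; i++ {
            if i*i == n {
                return true
            }
        }
        return false
    }

    func TestF(t *testing.T) {
        cases := []struct {
            name    string
            x, y    int
            wantErr bool
        }{
            // Representative positive ("storybook") cases first.
            {"small squares", 4, 9, false},
            {"equal squares", 16, 16, false},
            // Negative cases, pruned by assuming the square-number
            // check runs before anything else: one bad argument per
            // case is enough.
            {"x not a square", 3, 9, true},
            {"y not a square", 4, 8, true},
        }
        for _, tc := range cases {
            _, err := f(tc.x, tc.y)
            if (err != nil) != tc.wantErr {
                t.Errorf("%s: f(%d, %d) error = %v, wantErr %v",
                    tc.name, tc.x, tc.y, err, tc.wantErr)
            }
        }
    }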

Note that these tests are essentially sequential, because each test relies on assumptions that are thought to be verified by earlier tests. Some of them can make use of tables, but in my opinion the whole suite should read as a gradient of increasing complexity and assurance. Table tests are only a good fit for some portion of that gradient. And I don't like the per-case subtests suggested in the OP; they just feel like a workaround for Go's inability to provide more context on panic.
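
For reference, the per-case subtest pattern in question looks roughly like this in Go (continuing the hypothetical table from the sketch above); `t.Run` attaches the case name to failures and panics, which is the extra context being worked around:

    for _, tc := range cases {
        t.Run(tc.name, func(t *testing.T) {
            _, err := f(tc.x, tc.y)
            if (err != nil) != tc.wantErr {
                t.Errorf("f(%d, %d) error = %v, wantErr %v",
                    tc.x, tc.y, err, tc.wantErr)
            }
        })
    }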




Btw, the fact that those tables make intuitive sense to humans sounds like a strength, but it's also a big weakness. Humans aren't always good at thinking about corner cases.

Property based testing is one way to get past this limitation. (As you also imply in the later part of your comment.)


One of the difficulties with property-based testing is that humans are notably bad at specifying good enough properties. A canonical example is sorting: if your only property is that consecutive elements should be ordered, an implementation that overwrites some elements with duplicates of others won't be caught. We can't always come up with a complete property, but intuitive example cases can complement otherwise incomplete properties.
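
A quick Go sketch of that sorting example, using the standard testing/quick package; `buggySort` is a deliberately broken stand-in that keeps the output ordered while overwriting an element:

    package mypkg

    import (
        "reflect"
        "sort"
        "testing"
        "testing/quick"
    )

    // buggySort sorts its input but then overwrites the first element,
    // so the result is still ordered but not a permutation of the input.
    func buggySort(xs []int) []int {
        out := make([]int, len(xs))
        copy(out, xs)
        sort.Ints(out)
        if len(out) > 1 {
            out[0] = out[1]
        }
        return out
    }

    // The "consecutive elements are ordered" property alone passes
    // despite the bug.
    func TestSortedOnly(t *testing.T) {
        prop := func(xs []int) bool {
            return sort.IntsAreSorted(buggySort(xs))
        }
        if err := quick.Check(prop, nil); err != nil {
            t.Error(err)
        }
    }

    // Also requiring that the output equals an independently sorted
    // copy of the input (ordered *and* same multiset) catches the bug.
    func TestSortedAndPermutation(t *testing.T) {
        prop := func(xs []int) bool {
            want := make([]int, len(xs))
            copy(want, xs)
            sort.Ints(want)
            return reflect.DeepEqual(buggySort(xs), want)
        }
        if err := quick.Check(prop, nil); err != nil {
            t.Logf("stronger property catches the bug: %v", err)
        }
    }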


Yes, coming up with good properties is a skill that requires practice.

I find that in practice, training that skill also helps people come up with better designs for their code. (And people are even more hopeless at coming up with good example-based test cases.)

Of course, property-based testing doesn't mean you have to swear off specifying examples by hand. You can mix and match.

When you are just starting out with property-based testing, at a minimum you can come up with some examples, but then replace the parts that shouldn't matter (eg that your string is exactly 'foobar') with arbitrary values.

That's only slightly more complicated than a fixed example, and only slightly more comprehensive in testing; but it's much better in terms of how well your tests are documenting your code for fellow humans. (Eg you have to be explicit about whether the empty string would be ok.)
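
A small Go sketch of that progression, with a made-up `greet` function: the fixed example pins the name to "foobar", while the property version only pins the part that matters and, by not filtering the generated names, states explicitly that the empty string is acceptable input:

    package mypkg

    import (
        "testing"
        "testing/quick"
    )

    // greet is a made-up function under test.
    func greet(name string) string { return "Hello, " + name + "!" }

    // Fixed example: the exact string "foobar" shouldn't matter.
    func TestGreetExample(t *testing.T) {
        if got := greet("foobar"); got != "Hello, foobar!" {
            t.Errorf("greet(%q) = %q", "foobar", got)
        }
    }

    // Same test with the incidental part made arbitrary.
    func TestGreetAnyName(t *testing.T) {
        prop := func(name string) bool {
            return greet(name) == "Hello, "+name+"!"
        }
        if err := quick.Check(prop, nil); err != nil {
            t.Error(err)
        }
    }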



