
This is so important, but even most professional software developers and testers don't seem to understand it. Tests are not supposed to break on every little change to the code, or even on sweeping changes in implementation. If they do, they're not very good tests.

https://testing.googleblog.com/2013/08/testing-on-toilet-tes...

https://medium.com/expedia-group-tech/test-your-tests-f4e361...




I disagree. You are basically saying that only end-to-end tests are very good tests.

There are different reasons to have tests.

Unit tests often break even during small refactorings, but they are great for quickly covering a lot of edge cases of some complex piece of code. They also point quickly to what exactly is wrong.
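
Something like this, as a minimal sketch assuming a Jest-style runner (parseDuration and its module are made up, standing in for the "complex piece of code"):

    import { parseDuration } from './duration'; // hypothetical module under test

    // One line per edge case, so adding coverage is cheap, and a
    // failure names the exact input that went wrong.
    test.each([
      { input: '0s',    ms: 0 },
      { input: '90s',   ms: 90_000 },
      { input: '1m30s', ms: 90_000 },
      { input: '2h',    ms: 7_200_000 },
    ])('parses $input', ({ input, ms }) => {
      expect(parseDuration(input)).toBe(ms);
    });

    test('rejects malformed input', () => {
      expect(() => parseDuration('abc')).toThrow();
    });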

End-to-end tests, on the other hand, should be almost immune to refactoring problems, but they are very slow, and it is often tricky to find where the bug is, or even whether there is a bug in the application at all - in those tests the bug is very often in the test code itself.


There are plenty of non-end-to-end interfaces that are well defined, so their tests don't break on changes, and that still encapsulate complex and important code. Somebody else in this thread describes one that calculates the costs on a contract.

Tests for those are unit tests, and they fit the GP's description. Those are the best units to test; often you don't want to unit test anything else.

Of course, YMMV. For example, in some languages you must test whether the data has the correct structure and the functions have the correct interface, which is best done by unit tests, and those tests will break often. Some problems are so complex that you must test intermediate steps, otherwise you will never get them right; those tests also break often, but when you need them, you need them.
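
A rough illustration of that structural kind of test (Jest-style again; the payload is inlined here, though in practice it would come from a file or an API):

    // JSON.parse returns untyped data, so the shape is asserted at
    // runtime - exactly the sort of test that breaks often but earns
    // its keep. The payload is inlined purely for illustration.
    test('config payload has the expected structure', () => {
      const config = JSON.parse('{"retries": 3, "endpoints": ["https://example.com"]}');
      expect(config).toMatchObject({
        retries: expect.any(Number),
        endpoints: expect.any(Array),
      });
    });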


If refactoring is causing you to struggle to fix the unit tests, then what you have really aren’t unit tests.

Unit tests should be cheap to write and cheap to reimplement. Otherwise GP’s right and the tests end up shackling you to an implementation, instead of defining the requirements programmatically.


Just stop trying to fix unit tests. All of this noise about test bloat and the pain of refactorings goes away if you make one mindset change: feel free to delete tests.

If you make a change and a unit test breaks, do your best to make sure it’s not an actual defect. If it isn’t and fixing the test looks like more work than you want, then delete the test. Maybe make a new one to cover a similar case if that makes sense.

“Less tests” is a really easy state to get to. Just highlight and delete. If I think having more tests is good and you think that it’s bad because you don’t want the tests there if they break, that’s not a problem! We can get back to the way you want things in less than a minute.

Someone writing a test is not a sworn oath to preserve it forever. It’s not even like a public method. No one is using it.


There's an inflection point with unit tests where a good description, a good matcher, and a small number of preconditions (mostly handled by nested test suites) make it easier to rewrite the test than to fix it. When I say 'cheap' tests, these are what I mean. Tests you can treat like livestock instead of pets.
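
Something like this, to make 'cheap' concrete (a Jest-style describe/it sketch; the Cart class is a stand-in, defined inline just to keep the example self-contained):

    // 'Cart' is a placeholder for whatever unit is under test.
    class Cart {
      items: string[] = [];
      constructor(readonly opts: { addresses: string[] }) {}
    }

    describe('Cart', () => {
      describe('with no saved shipping address', () => {  // precondition via nesting
        const cart = new Cart({ addresses: [] });

        it('starts out empty', () => {
          expect(cart.items).toHaveLength(0);             // one matcher, one assertion
        });
      });
    });

At that size, rewriting after a refactor is usually faster than debugging why the old version fails.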

The sentiment of the test may still apply, but the implementation details mean you're testing, for instance, for an empty set using a different strategy. Block-delete and start over.

Or the feature you're adding is meant to allow us to deal with the empty set: let our customers put things in their shopping cart even though we don't have any shipping addresses for them. In that case we 'broke' that requirement, so we just delete the test and move on.
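
To make the 'different strategy' case concrete (continuing the made-up Cart above): say a refactor turned cart.items from an array into a Set. The old assertion no longer fits, and the quickest honest move is to delete it and write the one that does:

    // Before: items was an array.
    it('starts out empty', () => {
      expect(cart.items).toHaveLength(0);
    });

    // After: items is a Set. Same sentiment, different strategy -
    // delete the old test and write this one instead.
    it('starts out empty', () => {
      expect(cart.items.size).toBe(0);
    });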

But getting cheap tests and high coverage is a bit of an art, often learned via the school of hard knocks. If you let too many requirements leak up into integration tests, you may find you have many, many tests to fix when a "never" or "always" changes to "occasionally" or "mostly". Some people get brave and delete these tests, and discover Chesterton's Fence later when someone yells at them for deleting regression tests: the test described two scenarios it was asserting, but it surreptitiously asserted two more, one of which the New Guy just broke while still getting a green build.

With a unit test, it's tenable to write a test that looks like it checks one rule and actually checks only that one rule.


See, I think your example of breaking the thing is where things are tricky. I know you’re not advocating for it, but I feel like a lot of people get burned by an experience like that. They think “Someone wrote a test. Then it broke. Then someone deleted it or fixed it wrong. Then something else broke and no one noticed because the test was deleted. Therefore, don’t write too many tests.”

Isn’t that last step super weird? I don’t see how not writing tests would have prevented the regression. At most there’s a vague, hand-wavy “people would be more careful”, which I don’t buy. The problem was breaking a dependency. Tests are not themselves dependencies; they are an automated stand-in for them.


As always, all the extremes are wrong.



