People keep saying this, but apart from the fact that nobody can agree on what an "integration test" is (because there's almost always some part of the application flow that you're stubbing out), it just becomes immediately apparent in a code base of sufficient size that "just use integration tests for everything" is only possible if you severely under-test (which usually includes things like not properly testing for error conditions etc.).
What? Nobody is advocating for integration tests to the exclusion of unit tests.
> but apart from the fact that nobody can agree on what an "integration test" is
Not being precise doesn't invalidate a guideline. The ideas that "stubbing less is better" and "testing functionality end to end is good bang-for-buck" aren't crappy because people don't agree on the details.
> it just becomes immediately apparent in a code base of sufficient size that "just use integration tests for everything" is only possible if you severely under-test
Your parent comment said "If we're building a rocket ship, you need both granular testing and coarse testing." Nobody is advocating for integration tests to the exclusion of unit tests. If you write a hash table or a CSV parser, yes you should unit test it.
But for most application-ish functionality you should reach for integration tests first. For example, when testing direct-message functionality in an app, checking that after a send there's a notification email queued and that the recipient's inbox endpoint reports 1 unread will get you really far in 20 lines of code. Is it exhaustive? Of course not. But the simplicity is a huge virtue.
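A minimal sketch of that test. The `App` class here is an invented in-memory stand-in so the example runs; in a real suite you'd exercise your actual app through a test client instead:

```python
# Hypothetical integration-style test: send a DM, then check the two
# visible outcomes (notification email queued, recipient sees 1 unread).

class App:
    """Toy stand-in for the application under test (invented names)."""
    def __init__(self):
        self.inboxes = {}       # recipient -> list of messages
        self.email_queue = []   # queued notification emails

    def send_dm(self, sender, recipient, body):
        self.inboxes.setdefault(recipient, []).append(
            {"from": sender, "body": body, "read": False})
        self.email_queue.append({"to": recipient, "kind": "dm_notification"})

    def unread_count(self, user):
        return sum(1 for m in self.inboxes.get(user, []) if not m["read"])

def test_dm_send_notifies_and_shows_unread():
    app = App()
    app.send_dm("alice", "bob", "hi!")
    # One notification email queued for the recipient...
    assert [e["to"] for e in app.email_queue] == ["bob"]
    # ...and the recipient's inbox reports 1 unread message.
    assert app.unread_count("bob") == 1

test_dm_send_notifies_and_shows_unread()
```

The test only looks at outcomes a user (or another system) could observe, so it survives refactors of everything in between.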
> if you severely under-test (which usually includes things like not properly testing for error conditions etc.)
I advocate "default to integration testing for application functionality". Those focused on unit tests often mock exactly the things most likely to break: integration points between systems. "Unit" tests of systems are often really verbose, prescriptive about internal state, and worst of all don't catch the bits that actually break.
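To illustrate the brittleness (all names invented): this "unit" test mocks out the email gateway, i.e. exactly the integration point most likely to break, and pins the precise internal call, so a harmless refactor such as batching sends or renaming a keyword argument fails the test even though visible behavior is unchanged:

```python
from unittest.mock import Mock

def send_dm(message, email_gateway):
    # Hypothetical application code under test.
    email_gateway.enqueue(to=message["recipient"], kind="dm_notification")

def test_send_dm_brittle():
    gateway = Mock()
    send_dm({"recipient": "bob"}, gateway)
    # Prescribes the exact internal call shape rather than any
    # observable outcome -- and never touches the real gateway,
    # so a broken integration still passes.
    gateway.enqueue.assert_called_once_with(to="bob", kind="dm_notification")

test_send_dm_brittle()
```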
> Nobody is advocating for integration tests to the exclusion of unit tests.
Oh, I got here just now, but well, let me advocate it. (Well, not all unit tests, but most of them.)
The ideal layer to test is the one that gives you visible behavior. You should test there, and compare the behavior with the specification.
Invisible behavior is almost never well defined, so any test there has a very high maintenance cost and gives you low-confidence results. Besides, the large freedom of choice at that layer gives it a huge surface to test. It's a bad place to test in general.
Now, of course there are exceptions where the invisible behavior is well defined, or where it has a smaller test surface than the visible one. In those cases it's well worth testing there. But those are the exceptions.
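As a small sketch of "test at the visible layer" (toy code, invented for illustration): an LRU cache checked only through its public get/put behavior against its specification, never by inspecting the internal ordering structure:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()  # internal detail: don't test this

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # a get also counts as "use"
        return self._data[key]

# Visible-behavior tests: only outcomes the spec promises.
cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
assert cache.get("a") == 1   # "a" is now the most recently used
cache.put("c", 3)            # evicts "b", the least recently used
assert cache.get("b") is None
assert cache.get("a") == 1
```

Nothing here asserts how `_data` is ordered internally, so swapping `OrderedDict` for another structure leaves the tests untouched.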
> I advocate "default to integration testing for application functionality". Those focused on unit tests often mock exactly the things most likely to break: integration points between systems. "Unit" tests of systems are often really verbose, prescriptive about internal state, and worst of all don't catch the bits that actually break.
You're writing this as a comment to an article that explains exactly how to write unit tests that aren't brittle and avoid mocking.
"Only integration tests" vs. "brittle unit tests" is a false dichotomy.
I wholeheartedly agree that people can't agree on what "unit test" means precisely, either (even though I think your specific examples are a bit disingenuous). In particular, classical and London-school / mockist TDD have rather different definitions of it.
That's why it's important to have a well-rounded test strategy with different types of tests that have different purposes, instead of using some blanket approaches.