If you think 100% code coverage with unit tests is bad, you should see what happens when it's done with integration tests. I'm refactoring some tests now that were written with code coverage in mind, apparently with bonuses tied to the coverage stat. I think I've seen every testing anti-pattern possible in just this one group of ~25 tests.
There are the developers who don't understand the difference between unit and integration tests. Both are fine, but integration tests aren't a good tool for hitting corner cases.
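To make that concrete, here's a made-up sketch (the function and names are invented, not our actual code): corner cases like these take milliseconds to enumerate in a unit test, while each one would need its own full end-to-end run as an integration test.

    import pytest

    # Hypothetical unit under test; the names are invented for illustration.
    def parse_quantity(text):
        # Parse a non-negative integer quantity, rejecting junk input.
        value = int(text.strip())
        if value < 0:
            raise ValueError("quantity must be non-negative")
        return value

    # Boundary and junk inputs are cheap to enumerate at the unit level.
    @pytest.mark.parametrize("text,expected", [("0", 0), (" 7 ", 7), ("00042", 42)])
    def test_parse_quantity_accepts_boundaries(text, expected):
        assert parse_quantity(text) == expected

    @pytest.mark.parametrize("text", ["-1", "abc", ""])
    def test_parse_quantity_rejects_junk(text):
        with pytest.raises(ValueError):
            parse_quantity(text)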
Many of the tests don't actually test what they pretend to. A few weeks ago I broke production code that had a test specifically for the case I broke, but the test didn't catch it because its input was wrong. The coverage was there, though.
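Roughly this shape, with invented names: the test is named after the case, but the input never reaches it, so it keeps passing no matter what you break.

    # Invented example of the failure mode, not the real code.
    def total_price(quantity, unit_price):
        subtotal = quantity * unit_price
        if quantity >= 100:           # the bulk-discount case the test claims to cover
            subtotal *= 0.9
        return subtotal

    def test_bulk_discount_applied():
        # Wrong input: 10 never crosses the threshold, so the discount branch
        # is untouched, yet the test passes and the coverage report looks fine.
        assert total_price(10, 2.0) == 20.0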
Most of the tests give no indication of what they're actually testing, or a misleading one. You have to divine it yourself from the input data, but most of that is copy/pasted, so much of it isn't actually relevant to the test (I suspect it was there because it counted toward the overall coverage metric).
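What I'd rather see, again with invented names: a fixture that only carries the fields the behavior depends on, under a test name that says what's being checked, so nobody has to reverse-engineer intent from a wall of copy/pasted data.

    # Invented sketch of the style I'd prefer: a minimal, purpose-built fixture.
    def make_order(status="open", total=0.0):
        # Only the fields this behavior cares about; everything else defaulted.
        return {"status": status, "total": total}

    def is_refundable(order):
        # Stand-in for the production rule under test.
        return order["status"] == "shipped" and order["total"] > 0

    def test_shipped_paid_order_is_refundable():
        assert is_refundable(make_order(status="shipped", total=49.99))

    def test_open_order_is_not_refundable():
        assert not is_refundable(make_order(status="open", total=49.99))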
The output of the code defined the test. They literally ran the code, copied the file output into the "expected" directory, and used that in future comparisons. If the files don't match, the test opens a diff viewer, but a lot of things, like ordering, aren't deterministic, so the diff gives you no indication of where things actually went wrong.
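The ordering problem at least is fixable by canonicalizing both sides before comparing. A minimal sketch, assuming line-delimited JSON output purely for illustration (not our real format, and leaving out the golden-file plumbing):

    import json

    def canonicalize(text):
        # Sort the records and their keys so output that differs only in
        # nondeterministic ordering compares equal, and a real mismatch
        # produces a small, readable diff instead of noise.
        records = [json.loads(line) for line in text.splitlines() if line.strip()]
        records.sort(key=lambda r: json.dumps(r, sort_keys=True))
        return "\n".join(json.dumps(r, sort_keys=True) for r in records)

    def test_ordering_noise_is_ignored():
        run_a = '{"id": 2, "name": "b"}\n{"id": 1, "name": "a"}\n'
        run_b = '{"name": "a", "id": 1}\n{"id": 2, "name": "b"}\n'
        assert canonicalize(run_a) == canonicalize(run_b)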
Many tests succeed, but for the wrong reason: they check failure cases without actually verifying that the code failed for the right reason.
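The usual shape of it, with invented names: a catch-all assertion that's satisfied by any exception, including one caused by the test's own bad input.

    import pytest

    class QuotaExceeded(Exception):
        pass

    # Invented stand-in for the code under test.
    def reserve(seats):
        if not isinstance(seats, int):
            raise TypeError("seats must be an int")   # the wrong failure
        if seats > 10:
            raise QuotaExceeded("too many seats")     # the failure we mean to test
        return seats

    def test_quota_weak():
        # Passes, but only because the string argument trips the TypeError,
        # not because the quota check works.
        with pytest.raises(Exception):
            reserve("11")

    def test_quota_strict():
        # Pins down both the exception type and the message.
        with pytest.raises(QuotaExceeded, match="too many seats"):
            reserve(11)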
Some test "helpers" actually replicate production code, so the tests are mostly verifying that the helpers work.
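In other words (invented example, but faithful to the pattern): the helper re-derives the expected value with the same formula as production, so the assertion can only catch the two copies drifting apart, never the formula itself being wrong.

    def apply_tax(amount):              # production code
        return round(amount * 1.19, 2)

    def expected_with_tax(amount):      # test "helper" reimplementing the same formula
        return round(amount * 1.19, 2)

    def test_apply_tax_circular():
        # Compares the code with a copy of itself; a wrong rate still passes.
        assert apply_tax(10.00) == expected_with_tax(10.00)

    def test_apply_tax_pinned():
        # Better: assert against an independently computed value.
        assert apply_tax(10.00) == 11.90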
Finally, due to some recent changes the tests don't even test production code paths anymore. We can't delete them because that would reduce code coverage, but porting them to actually test the new code will take time they aren't sure they want to invest.
/end rant
Wow. Yeah, that sounds awful. You're absolutely right that pursuing 100% coverage in integration tests is bad too, perhaps even worse. I just haven't seen that in my own direct experience. Having too few requirements around integration tests seems like a far more common problem than having too many.