
Quite often, yes. That's why I prefer integration tests.



Indeed. Too many tests are just testing nothing other than mocks. That goes for my coworkers directly and for their Copilot output. They're not useful tests; they aren't things that catch actual errors. They're maybe useful as usage documentation, but in general they're mostly a waste.

Integration tests, good ones, are harder but far more valuable.


> Too many tests are just testing nothing other than mocks

Totally agree, and I find that they don't help much as documentation either, because the person who wrote them often doesn't know what they're trying to test. So they only overcomplicate things.

They're also harmful because they give a false sense of security that the code is tested when it really isn't.
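To make the "testing nothing other than mocks" pattern concrete, here's a minimal sketch with hypothetical names (`get_user_name`, `fetch_user` are invented for illustration). The test stubs the only dependency and then asserts on the stub itself, so it verifies the mock's wiring rather than any real behavior:

```python
from unittest.mock import Mock

def get_user_name(repo, user_id):
    # Hypothetical code under test: it just delegates to the repository.
    return repo.fetch_user(user_id)["name"]

def test_get_user_name_only_tests_the_mock():
    repo = Mock()
    repo.fetch_user.return_value = {"name": "Ada"}

    # Both assertions pass no matter what the real repository does:
    # the first restates the stubbed value, the second checks the plumbing.
    assert get_user_name(repo, 42) == "Ada"
    repo.fetch_user.assert_called_once_with(42)
```

A test like this stays green even if the real data layer is broken, which is exactly the false sense of security described above.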


My approach in the past has been that only certain parts of the code are worth unit testing. But given how much easier unit tests are to write now with AI I think the % of code worth unit testing has gone up.


> But given how much easier unit tests are to write now with AI I think the % of code worth unit testing has gone up.

I see the argument, I just disagree with it. Test code is still code and it still has to be maintained, which, sure, "the AI will do that," but now there's a lot more that I have to babysit.

The tests that I'm seeing pumped out by my coworkers who are using AI for it just aren't very good tests a lot of the time, and honestly encode too much of the specific implementation details of the module in question into them, making refactoring more of a chore.
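As a sketch of what "encoding implementation details" looks like in practice (all names here, `total_price` and `tax_for`, are hypothetical), compare an over-specified test that pins the exact internal call sequence against one that only checks observable behavior:

```python
from unittest.mock import Mock, call

def total_price(cart, tax_svc):
    # Hypothetical code under test: subtotal plus tax from a service.
    subtotal = sum(item["price"] for item in cart)
    return subtotal + tax_svc.tax_for(subtotal)

def test_overspecified():
    tax = Mock()
    tax.tax_for.return_value = 1.0
    assert total_price([{"price": 10.0}], tax) == 11.0
    # Brittle: pins that tax_for is called exactly once with the subtotal.
    # Any internal refactor (caching, batching) breaks this assertion even
    # when the returned total is unchanged.
    assert tax.tax_for.call_args_list == [call(10.0)]

def test_behavioral():
    # Sturdier: substitute a known tax rule and check only the result.
    fake_tax = Mock()
    fake_tax.tax_for.side_effect = lambda s: s * 0.1
    assert total_price([{"price": 10.0}, {"price": 20.0}], fake_tax) == 33.0
```

The second style survives refactoring because it treats the module as a black box, which is the property the implementation-coupled AI-generated tests tend to lack.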

The tests I'm talking about simply aren't going to catch any bugs, and they weren't used as an isolated execution environment for test-driven development, so what use are they? I'm not convinced, not yet anyway.

Just because we can get "9X%" coverage with these tools, doesn't mean we should.




