Heh, when you are working on hardware, you usually build a real test device and, yes, actually cause real hardware faults. Mocks or tests will not prepare you for the real thing, since the hardware fault you detect is usually just the surface of the problem. Let's examine a practical example where a disk becomes full. Suddenly file creation will fail, as will writes -- so how do you handle that? In isolation, you might mock that condition out so you can test it. You handle the error in isolation, but this is actually quite dangerous. Your application should fail explosively -- don't be like some popular databases that just continue as though nothing happened at all, corrupting their state so they will never start again.
Generally, if you can detect a hardware fault in your code, crash unless you know for certain that you can actually recover from it -- meaning you know what the problem is (you can somehow tell the difference between a missing disk and a missing file). 99.9% of the time you cannot recover from hardware issues with software, so it is pointless to test that case. Please, for the love of working software, just crash so the state doesn't get corrupted instead of trying to overengineer a solution.
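To make that concrete, here is a rough sketch in Go of what "just crash" looks like for the disk-full case; appendRecord, journal.log, and the logging choice are all made up for illustration, not a prescription:

    // Sketch only: append a record and crash the process if anything about the
    // write fails, rather than swallowing the error and limping along.
    package main

    import (
        "log"
        "os"
    )

    func appendRecord(path string, record []byte) {
        f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
        if err != nil {
            log.Fatalf("cannot open %s: %v", path, err) // unrecoverable: crash loudly
        }
        defer f.Close()

        if _, err := f.Write(record); err != nil {
            log.Fatalf("write failed (disk full?): %v", err) // do not pretend it worked
        }
        if err := f.Sync(); err != nil {
            log.Fatalf("fsync failed, data may not be durable: %v", err)
        }
    }

    func main() {
        appendRecord("journal.log", []byte("event\n"))
    }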
> Heh, when you are working on hardware, you usually build a real test device
All software works on hardware. Your web application doesn't need a test device, though. The hardware is already tested. You can treat it as an integration point. But even if you did create a test device for whatever reason, that's a mock! Which you say is to be avoided, and that there are better ways, without sharing what those better ways are...
> Please, for the love of working software, just crash so the state doesn't get corrupted instead of trying to overengineer a solution.
While you're not wrong, you need to test to ensure that it actually crashes. All defined behaviour needs to be tested, and you have defined behaviour here.
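For example, assuming the crash is implemented as a panic (so it can be observed in-process; a log.Fatal/os.Exit style crash would need a subprocess-style test instead), the test for that defined behaviour might look like this sketch -- failingWriter and mustWrite are hypothetical names:

    // Sketch: assert that the defined behaviour ("crash on write failure") holds.
    package journal

    import (
        "errors"
        "io"
        "testing"
    )

    // failingWriter simulates a full disk: every write reports a failure.
    type failingWriter struct{}

    func (failingWriter) Write(p []byte) (int, error) {
        return 0, errors.New("no space left on device")
    }

    // mustWrite is the hypothetical code under test: it fails loudly on error.
    func mustWrite(w io.Writer, p []byte) {
        if _, err := w.Write(p); err != nil {
            panic(err)
        }
    }

    func TestWriteFailureCrashes(t *testing.T) {
        defer func() {
            if recover() == nil {
                t.Fatal("expected a crash when the write fails")
            }
        }()
        mustWrite(failingWriter{}, []byte("record"))
    }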
> Which you say is to be avoided, and that there are better ways, without sharing what those better ways are...
That's because it would better fit in a book than an HN comment, not because I don't want to answer. Basically the gist is to write "obviously correct" code that doesn't need to be tested, along with an architecture that lends itself to being testable without mocks.
Most people tend to write an interface and then inject a concrete type that could also be a mock. I've seen tests written this way that need to mock out 20-60 things just to test the one thing they want to test.
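Something like this sketch is what I mean (Mailer and SignupService are made-up names): the interface exists largely so a stand-in can be slotted in at test time.

    package signup

    // Mailer exists mostly so tests can inject a stand-in.
    type Mailer interface {
        Send(to, subject, body string) error
    }

    type SignupService struct {
        Mailer Mailer // a real SMTP client in production, a mock in tests
    }

    func (s *SignupService) Register(email string) error {
        return s.Mailer.Send(email, "Welcome", "Thanks for signing up")
    }

    // In a test file, a hand-rolled (or generated) mock records calls instead of
    // talking to a real server.
    type mockMailer struct{ sent []string }

    func (m *mockMailer) Send(to, subject, body string) error {
        m.sent = append(m.sent, to)
        return nil
    }

Multiply that by the 20-60 dependencies a real handler pulls in and you get the test setups I'm describing.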
In most web frameworks I've worked with, this is largely unavoidable, since the framework provides dependency injection that is natural to mock.
If you aren't using a framework that has an opinion on how things should work, mocks can be avoided completely through other techniques: test harnesses, or replacing dependencies with alternative implementations (in-memory queues instead of cloud services, sqlite instead of a heavy database, and so on). Sometimes you have no choice but to avoid mocks. Distributed systems, for example, usually avoid them for certain kinds of tests because the failure modes simply can't be emulated very well -- the latencies, network partitions, and network failures are just too numerous to mock out (similar to the disk issue I was referring to earlier). You don't know whether a node is down or the cable got cut, and you need to behave appropriately to avoid split-brain scenarios. In these cases, test harnesses that can emulate specific scenarios are much better.
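As a rough sketch of the alternative-implementation idea (the names are illustrative; production might wrap SQS, Pub/Sub, or whatever): the code under test only ever sees a small interface, and tests get a second real implementation instead of a mock.

    package worker

    // Queue is the only thing the code under test knows about.
    type Queue interface {
        Enqueue(msg string) error
        Dequeue() (string, bool)
    }

    // MemQueue is a real, working queue that happens to live in memory; tests use
    // it in place of the cloud-backed implementation.
    type MemQueue struct{ msgs []string }

    func (q *MemQueue) Enqueue(msg string) error {
        q.msgs = append(q.msgs, msg)
        return nil
    }

    func (q *MemQueue) Dequeue() (string, bool) {
        if len(q.msgs) == 0 {
            return "", false
        }
        m := q.msgs[0]
        q.msgs = q.msgs[1:]
        return m, true
    }

    // Process drains one message; it behaves identically against either backend.
    func Process(q Queue) (string, bool) {
        return q.Dequeue()
    }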
> Basically the gist is to write "obviously correct" code that doesn't need to be tested
I don't see how that follows. The purpose of testing is to document the code for the understanding of future developers, not to prove correctness. The only 'correctness' a test proves is that the documentation is true. That is still incredibly useful, as I am sure you are painfully aware if you have ever dealt with legacy forms of documentation (e.g. plain text files, Word documents, HTML, etc.) that quickly become out of date, but it is not a statement about the code itself.
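To make that concrete with a toy example of my own (Reverse is just a stand-in): Go's testable Example functions work exactly this way -- they render as documentation in godoc, and go test fails the moment the documented output stops being true.

    // file: strutil.go
    package strutil

    // Reverse returns s with its runes in reverse order.
    func Reverse(s string) string {
        r := []rune(s)
        for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
            r[i], r[j] = r[j], r[i]
        }
        return string(r)
    }

    // file: strutil_test.go
    package strutil

    import "fmt"

    // ExampleReverse is shown as documentation by godoc; go test re-runs it and
    // checks the Output comment, so the documentation cannot silently rot.
    func ExampleReverse() {
        fmt.Println(Reverse("hello"))
        // Output: olleh
    }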
> such as test harnesses, replacing dependencies with alternative implementations (such as in-memory queues instead of cloud services, sqlite instead of heavy databases, etc), etc.
These are all mocks, ultimately. Some desperately try to give them different names, but it is all the same at the end of the day.
> The purpose of testing is to document the code for the understanding of future developers, not to prove correctness.
Hmm. I've never seen tests with that goal in mind, except for behavioral tests that test the acceptance criteria.
> as I am sure you are painfully aware if you have ever dealt with legacy forms of documentation [...] that quickly become out of date
I have, but allowing that to happen is a culture issue, not something that is guaranteed to happen. When I open PRs against open source software, I always include a PR to the docs if anything changes. At work, updating the docs is part of the default acceptance criteria; it is usually the first thing we do before writing any code, and it goes through a PR process just like the code does. But we service enterprise customers, so we aren't going to hand them code or tests as the way to understand how to use our product.
> These are all mocks, ultimately.
This is a software field, and specific words have specific meanings; shoehorning things that aren't those things under one generalized label is acceptable when teaching, but not when actually working on them. In other words, I would accept this if you were explaining the concept to a junior engineer, but not from one senior engineer to another.
Then you've never seen a test, I guess. That is the only goal they can serve, fundamentally.
> I have, but allowing that to happen is a culture-issue, not something that is guaranteed to happen.
Mistakes are guaranteed to happen given enough output and time. No matter how hard you try, you are going to make a mistake at some point. It is the human condition. In the olden days one might use a proofreader to try to catch the mistakes, but with the advent of testing a computer can do the "proofreading" automatically, making that manual effort pointless.
Maybe in the age of LLMs we can go back to writing documentation in "natural" language while still using machines to do the validation work, but then again if you write code you probably would prefer to read code. I know I would! The best language is the one you are already using. Having to read code documentation in English is a horrible user experience.
> This is a software field and there are specific words with specific meaning
Sure, but in this case you won't find any real difference in meaning across the vast array of words we try to use here. The desperate attempts to find new words exist to sidestep silly soundbites like "mocks are a code smell", so that one can say "I'm not mocking, I'm flabbergasting!", even though it is the same thing...
> Then you've never seen a test, I guess. That [i.e. not to prove correctness] is the only goal they can serve, fundamentally.
I cannot wrap my head around this statement. It's literally in the name: "test" as in to prove something works... hopefully as designed.
> Mistakes are guaranteed to happen given enough output/time. No matter how hard you try, you are going to make a mistake at some point.
Yep, and they do. It's really easy to figure out which one is right: if the docs say that something happens, it happens. If the code doesn't do what the docs say, the code (and the tests) are wrong, not the other way around.
> Having to read code documentation in English is a horrible user experience.
It's the difference between intention and action! I worked with a guy who opened PRs with totally empty descriptions. It was annoying. When I was reviewing his code, I had to first figure out his intention before I could understand why there was a PR in the first place. Was he fixing a bug, adding a new feature, or just writing code for the hell of it? ... nobody knew. Then, when you spotted a bug, you had to ask if it was a bug or on purpose, because you didn't know why the code was there in the first place.
Documentation is that living PR description. It doesn't just tell you WHAT exists, but WHY it exists, what purpose it serves, why that weird little line is the way it is, etc., etc.
> It's literally in the name: "test" as in to prove something works...
The documentation is what is under test. It proves that what is documented is true. It does not prove that the implementation works. This should be obvious. Consider the simplest case: A passing test may not even call upon the implementation.
I have most definitely seen that in the wild before! More times than I wish I had. This is why TDD urges you to write tests first, so that you can be sure that the test fails without implementation support. But TDD and testing are definitely not synonymous.
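A contrived sketch of what I mean (the names are made up): the test below is green and will stay green forever, because it re-derives the expected answer by hand and never touches the implementation it is supposedly covering.

    package pricing

    import "testing"

    // Discount is the production code the test is nominally about.
    func Discount(cents int) int {
        return cents // bug: the discount is never applied
    }

    // TestDiscount passes, yet proves nothing about Discount: it never calls it.
    func TestDiscount(t *testing.T) {
        expected := 9000
        got := 10000 - 10000/10 // the test does the arithmetic by hand
        if got != expected {
            t.Fatalf("expected %d, got %d", expected, got)
        }
    }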
> Its really easy to figure out which one is right: if the docs say that something happens, it happens.
Under traditional forms of documentation, you don't have much choice but to defer to the implementation. With modern documentation that is tested, typically the documentation is placed above the implementation. Most organizations won't deploy their code until the documentation is proven to be true. The implementation may not work, but the documentation will hold.
> I worked with a guy who opened PRs with totally empty descriptions.
I'm not sure PRs fit the discussion. PRs document human processes, not code. Human processes will typically already be in English (or a similar natural language), so, in the same vein, the best language is the one you are already using. That is not what we were talking about earlier; but, granted, it does do a good job of solidifying the premise.