But note that not only are there no silver bullets; as sibling comments note, kernels (and really anything that touches hardware and "the real world", e.g. getting back packets with random reorders, duplicates, and drops) also have particular trouble with unit testing. And even in the cases where it might work, it's not universally applied, I think.
The sibling is dead wrong though. It is trivially easy to concoct, in a unit test, any messed-up circumstances you wish to imagine, no matter how hard they would be to reproduce in reality.
I don't think that's true unless the entire system is purely functional (i.e. all functions take inputs and produce outputs and never touch anything resembling shared state). For example, how would you write a unit test that checks the behavior of two threads writing to a single memory buffer from different CPU cores? I could easily be missing a trick, but the only options I can see are integration tests, not unit tests.
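To make the question concrete, the closest thing I can picture is a rough sketch like this (plain pthreads, made-up buffer, all names hypothetical), and it's a nondeterministic stress test rather than a unit test, because whether the interleaving you care about ever actually happens is up to the scheduler and the hardware:

    #include <assert.h>
    #include <pthread.h>
    #include <string.h>

    /* Hypothetical shared buffer that both threads scribble into. */
    static char buf[64];
    static pthread_barrier_t start;

    static void *writer(void *arg)
    {
        char c = *(char *)arg;
        pthread_barrier_wait(&start);   /* try to line the threads up */
        memset(buf, c, sizeof(buf));    /* race: unsynchronized writes */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        char ca = 'A', cb = 'B';

        pthread_barrier_init(&start, NULL, 2);
        pthread_create(&a, NULL, writer, &ca);
        pthread_create(&b, NULL, writer, &cb);
        pthread_join(a, NULL);
        pthread_join(b, NULL);

        /* What do you even assert here? The outcome depends on which
           cores ran what, in which order, with which cache timing. */
        assert(buf[0] == 'A' || buf[0] == 'B');
        return 0;
    }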
Don't go putting words in my mouth. Linux should have unit tests, does have unit tests, and probably should have more unit tests. They are a tool that works well for some cases and not others, and many parts of a kernel are cases where unit tests are not a useful tool.
Code that does I/O has a lot of interplay that's hard to replicate and impossible to cover entirely. The physical world is nothing but shared mutable state.
Yes, and that's what automated tests are for. They "replicate" specific conditions and make it possible to cover everything. That's what unit tests are. This has nothing to do with the physical world.
By passing it faked hardware. Yes, you have to write your APIs so they are testable. Yes, it is virtually impossible to retrofit unit tests into an old, large code base that was written without regard to testability. But no, it is not difficult at all to fake or mock hardware states in code that was designed with some forethought.
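As a rough sketch of what I mean, in plain C with made-up names: if the driver logic only ever touches the device through a small ops struct, a test can hand it an in-memory fake that reports whatever broken hardware state you want to exercise.

    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical seam: the driver talks to hardware only through this. */
    struct hw_ops {
        uint32_t (*read_reg)(void *ctx, uint32_t reg);
        void     (*write_reg)(void *ctx, uint32_t reg, uint32_t val);
    };

    /* Code under test: returns -1 if the device reports its error bit. */
    static int device_ready(const struct hw_ops *ops, void *ctx)
    {
        uint32_t status = ops->read_reg(ctx, 0x04);
        if (status & 0x80000000u)       /* error bit set */
            return -1;
        return (status & 1u) ? 1 : 0;
    }

    /* Fake "hardware": just hands back a canned status word. */
    static uint32_t fake_read(void *ctx, uint32_t reg)
    {
        (void)reg;
        return *(uint32_t *)ctx;
    }

    int main(void)
    {
        struct hw_ops fake = { .read_reg = fake_read };
        uint32_t status;

        status = 0x80000001u;   /* error + ready: must be reported as broken */
        assert(device_ready(&fake, &status) == -1);

        status = 0x00000001u;   /* healthy and ready */
        assert(device_ready(&fake, &status) == 1);
        return 0;
    }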
That may hold for a trivial device or a perfectly spec-compliant device. However, the former is not interesting and the latter does not exist. I agree that more test coverage would be beneficial, but I think you're heavily downplaying the difficulty of writing realistic mock hardware.
Do you have experience doing this in C/C++? There are a bunch of things about the language and linkage models of both (e.g. how symbol visibility and linking work) that make doing DI in C/C++ significantly harder than in most other languages. And even when you can do it, it generally requires techniques that introduce overhead in non-test builds. For example, you need to use virtual methods for everything you want to be able to mock/test, and besides the overhead of the virtual call itself, this affects inlining and so on.
This doesn't even get into the fact that concurrency issues are usually difficult to properly unit test at all unless you already know what the bug is in advance. If you have a highly concurrent system and triggering a specific bug requires that a bunch of different things be in some specific state, you CAN write a test for that in principle, but it's a huge amount of work and presupposes you've already done all the debugging. Which is why C/C++ developers rely on a bunch of other techniques, like sanitizer builds, to catch issues like this.
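For instance, a made-up race like the one below is the kind of thing an ordinary test run only catches by luck, whereas a ThreadSanitizer build (-fsanitize=thread in gcc/clang) reports the racy accesses on essentially every run, which is why that's the tool people actually reach for:

    #include <pthread.h>
    #include <stdio.h>

    /* Unsynchronized counter: a classic data race. */
    static int counter;

    static void *bump(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            counter++;                  /* racy read-modify-write */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, bump, NULL);
        pthread_create(&b, NULL, bump, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);

        /* Asserting counter == 200000 here is flaky: it can pass or fail
           from run to run. Built with -fsanitize=thread, TSan flags the
           race itself rather than its occasional symptom. */
        printf("%d\n", counter);
        return 0;
    }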
Right, and building interfaces that support DI would also force Linux to grow up and learn how to build and ship a peak-optimized artifact, with devirtualization, post-link optimization, and all the goodies. It would be a huge win for users.
The fact that it would be hard to test certain edge cases does not in any way excuse the fact that the overwhelming bulk of functions in Linux are pure functions that are thread-hostile anyway, and these all need tests. The hard cases can be left for last.
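And those tests are cheap. For a pure helper, the whole thing is just calls and assertions; a made-up example in plain C (in the kernel tree this would presumably be a KUnit case, but nothing framework-specific is needed to make the point):

    #include <assert.h>
    #include <stdint.h>

    /* Made-up example of the kind of pure helper kernels are full of:
       input in, output out, no shared state touched. */
    static uint32_t round_up_pow2(uint32_t n)
    {
        uint32_t p = 1;
        while (p < n)
            p <<= 1;
        return p;
    }

    int main(void)
    {
        /* The whole "test suite": deterministic, no hardware, no threads. */
        assert(round_up_pow2(0) == 1);
        assert(round_up_pow2(1) == 1);
        assert(round_up_pow2(3) == 4);
        assert(round_up_pow2(4) == 4);
        assert(round_up_pow2(1000) == 1024);
        return 0;
    }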