No, you can't get away with this semantic dodge, because Raymond numbered what he believed were the most important lessons he was imparting, and the one corresponding to Linus' Law is:
> 8. Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.
He even attempted an axiomatic explanation:
> Maybe it shouldn't have been such a surprise, at that. Sociologists years ago discovered that the averaged opinion of a mass of equally expert (or equally ignorant) observers is quite a bit more reliable a predictor than the opinion of a single randomly-chosen one of the observers. They called this the Delphi effect. It appears that what Linus has shown is that this applies even to debugging an operating system, that the Delphi effect can tame development complexity even at the complexity level of an OS kernel.
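To make the Delphi claim concrete, here's a toy simulation (my own sketch, not anything from the essay): n observers each estimate a quantity with independent noise, and we compare the error of the averaged estimate against the error of a single randomly chosen one. All the numbers are made up for illustration.

```python
# Toy model of the "Delphi effect": the average of many noisy,
# equally (in)expert estimates beats a single randomly chosen estimate.
import random
import statistics

random.seed(42)
TRUE_VALUE = 100.0   # the quantity being estimated (arbitrary)
N_OBSERVERS = 50     # size of the observer pool (assumption)
N_TRIALS = 10_000

avg_errors, single_errors = [], []
for _ in range(N_TRIALS):
    # Each observer's estimate = truth + independent Gaussian noise (sd = 20).
    estimates = [random.gauss(TRUE_VALUE, 20.0) for _ in range(N_OBSERVERS)]
    avg_errors.append(abs(statistics.mean(estimates) - TRUE_VALUE))
    single_errors.append(abs(random.choice(estimates) - TRUE_VALUE))

print(f"mean |error|, averaged opinion : {statistics.mean(avg_errors):.2f}")
print(f"mean |error|, random individual: {statistics.mean(single_errors):.2f}")
# Averaging shrinks the error by roughly sqrt(N_OBSERVERS), i.e. ~7x here.
```

Whether that statistical effect actually transfers to debugging a kernel is, of course, exactly what's being argued in this thread.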
I was a developer at the time (still am), and if I'm remembering correctly, ESR was active on Slashdot and a few other places I hung out.
I took ESR's claim about bugs to imply that the quality of open source software would be greater than that of proprietary software, because the number of people who had access to the code would inevitably result in fewer bugs. A lot of the discussions around C&B at the time were about software quality. I don't think anyone expected there to be zero bugs, just that there would be fewer.
I am not convinced it turned out that way, but that's an interesting discussion for another thread.
An argument I read years ago, maybe on Raymond Chen's blog, seems at least plausible: in reality, the only thing that makes a difference is paying people to look for bugs and fix them, because people don't much like doing that work.
Well, I'm baffled then. From where I'm sitting, point 8 is clearly talking about what happens after a bug is discovered, not about discovering bugs.
The longer paragraph doesn't seem to contradict the notion either. My impression (based on the "How Many Eyeballs Tame Complexity" chapter) is that Raymond thought that "debugging" means "fixing bugs".
If I were criticising this part of the essay, I'd say the main weakness is that the things Raymond thought of as "taming complexity" weren't really addressing the hard problem of reducing the number of bugs.
As I mention upthread, a lot of the talk at the time was about overall quality of the software. So an interpretation that the total number of bugs (known and unknown) would be reduced is well aligned with this outlook.
If the objective is just to fix bugs that have been found, well, that doesn't really feed into this narrative. Also, ESR was making many claims about the efficacy of OSS, and limiting the scope to bugs already discovered would not really align with the rest of the goings-on at the time.
> 8. Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.
The key word here is characterized. That word is not equivalent to found.
Security vulnerabilities are unique in that they matter despite being unknown.
Other bugs are only important because of their direct impact on users. It's not unreasonable to take everything here and apply it to known bugs, and not to unknown vulnerabilities.
In professional engineering, you characterize the behavior after someone finds and reports it. Or you characterize a flow transducer, or you characterize gas circuit compliance. It’s the thing you do once you know there’s something to characterize.
OK. Well, in the variety of work I do, which I suppose isn’t professional by this standard, typically the person who finds a bug and the person who describes it are one and the same.
>> Given a large enough beta-tester and co-developer base...
That's a qualifier for what follows it. Seems true enough to me. If there are enough developers around to notice bugs, those bugs will probably be found and fixed quickly.
The argument here is that the pool of "developers" is essentially unlimited for open-source software. In principle I suppose that's indisputable, but in practice is it true?
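To put some toy numbers on "large enough": if each active reviewer independently has probability p of spotting a given bug, the chance that at least one does is 1 - (1-p)^n. Both p and n below are my assumptions for illustration, not anything ESR claimed.

```python
# Back-of-envelope for "many eyeballs": probability that at least one of
# n reviewers spots a given bug, assuming each spots it independently
# with probability p. All numbers are illustrative assumptions.
def p_found(n: int, p: float) -> float:
    return 1.0 - (1.0 - p) ** n

for p in (0.10, 0.01):
    for n in (1, 10, 100, 1000):
        print(f"p={p:.2f}, n={n:4d}: P(found) = {p_found(n, p):.3f}")
# The catch: n is the number of people actually reading the code,
# not the number with access to it -- and p for subtle bugs is tiny.
```

On this model the law only holds if n is large among people who read the code closely, which loops back to the "paying people to look" point upthread.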