> This event does raise awareness of what a sophisticated attacker group might try to do to the kernel community.
The limits of code review are quite well known, so it seems very questionable what scientific knowledge is actually gained here. (Indeed, precisely because those limits are known, you could very likely demonstrate them without misleading people: if you really wanted to run a formal study on some specific aspect, even reviewers who know to be suspicious are likely to miss problems. You could also study the history of in-the-wild bugs to learn about the review process.)
That's factually incorrect. The argument over what constitutes a proper code review continues to this day, with few comprehensive studies even about syntax, much less about code review itself - not "do you have reviews" or "how many people" but methodology.
> it appears very questionable what scientific knowledge is actually gained here
The knowledge doesn't come from the study merely existing, but from the analysis of the data collected.
I'm not even sure why people are upset about this, since it's a very modern approach to investigating how many projects are actually structured to this day. It was a daring and practical effort.