This is exactly what I would have said: this sort of research isn't 'human subjects research' and therefore is not covered by an IRB (whose job is to shield the university from legal risk, not to identify ethically dubious studies).

It is likely the professor involved here will be fired if they are pre-tenure, or sanctioned if post-tenure.




How in the world is conducting behavioral research on kernel maintainers to see how they respond to subtly malicious patches not "human subject research"?


In the restricted sense of 45 CFR Part 46, it's probably not quite human subject research (see https://www.hhs.gov/ohrp/regulations-and-policy/regulations/... ).

Of course, there are other ethical and legal requirements you're bound by, not just this one. I'm not sure which requirements US IRBs actually look into, though; it's a pretty murky situation.


How so?

It seems to qualify per §46.102(e)(1)(i) ("Human subject means a living individual about whom an investigator [...] conducting research: (i) Obtains information [...] through [...] interaction with the individual, and uses, studies, or analyzes the information [...]")

I don't think it'd qualify for any of the exemptions in §46.104(d): (1) requires an educational setting; (2) requires standard tests; (3) requires pre-consent, and the interactions must be "benign"; (4) only covers the use of PII with no interactions; (5) only covers public programs; (6) only covers food; (7) is about storing PII and isn't applicable; and (8) requires "broad" pre-consent and documentation of a waiver.


Rather than arguing about the technical details of the law, let me just clarify: IRBs would actively reject a request to review this. It's not in their (perceived) purview.

It's not worth arguing about this; if you care, you can try to change the law. In the meantime, IRBs will do what IRBs do.


If the law, as written, does actually classify this as human research, it seems like the correct response is to sue the University for damages under that law.

Since IRBs exist to minimize liability, it seems like that would be the fastest route toward change (assuming you have legal standing).


Woah woah woah, no need to whip out the litigation here. You could try that, but I am fairly certain you would be unsuccessful. The case would be dismissed with "this does not qualify under the law" before it ever reached trial, and the only effect would be to bolster the university's position.


It obviously qualifies and the guy just quoted the law at you to prove it.

Frankly, universities and academics need to be taken to court far more often. Our society routinely turns a blind eye to all sorts of fraudulent and unethical practices inside academia, and it has to stop.


That's still 10 thousand words you're linking to…

I had a look at §46.104 https://www.hhs.gov/ohrp/regulations-and-policy/regulations/... since it mentions the exemptions, and at (d)(3) inside it. It still doesn't apply: there's no agreement to participate, it's not benign, and it's not anonymous.


If there's some deeply legalistic answer explaining how the IRB correctly interpreted their rules to arrive at the exemption decision, I believe it. It'll just go to show the rules are broken.

IRBs are like the TSA: they impose annoyance and red tape on the honest vast majority while failing to actually filter the 0.0001% of things they ostensibly exist to filter.


Are you expecting science and institutions to be rational? If I were on the IRB, I wouldn't have considered this, since it's not a sociological experiment on kernel maintainers; it's an experiment to inject vulnerabilities into source code. That's not what IRBs are qualified to evaluate.


> it's an experiment to inject vulnerabilities into source code

I'm guessing it passed for similar reasons, along with the reviewers being unfamiliar with how "vulnerabilities are injected." To get the bad code in, the researcher needed to have the code reviewed by a human.

So if you rephrase "inject vulnerability" as "sneak my way past a human checkpoint", you might have a better idea of what they were actually doing, and might be better equipped to judge its ethical merit -- and whether it qualifies as research on human subjects.

To my thinking, it is quite clearly human experimentation, even if the subject is the process rather than a human individual. Ultimately, the process must be performed by a human, and it doesn't make sense to me that you would distinguish between the two.

And the maintainers themselves have said they feel they were the subjects of the research, so there's that.


Testing airport security by putting dangerous goods in your luggage is not human experimentation. Testing a bank's security is not human experimentation. Testing border security is not.

What makes people reviewing Linux kernel patches more 'human' than any of the above?


Tell that to the person on the hook if or when they get caught.


It's not an experiment in computer science; these guys aren't typing code into an editor and testing what it does after they've compiled it. They're contributing their vulnerabilities to a community of developers and testing whether those people accept them. It is nothing other than a sociological experiment.


This reminds me of a few passages in the SSC post on IRBs[0].

Main point is that IRBs were created in response to some highly unethical and harmful "studies" being carried out by institutions thought of as top-tier. Now they are considered to be a mandatory part of carrying out ethical research. But if you think about it, isn't outsourcing all sense of ethics to an organization external to the actual researchers kind of the opposite of what we want to do?

All institutions tend to be corruptible. Many tend to respond to their actual incentives rather than high-minded statements about what they're supposed to be about. Seems to me that promoting the attitude of "well an IRB approved it, so it must be all right, let's go!" is the exact opposite of what we really want.

All things considered, it's probably better to have something there than nothing. But you still have to be responsible for your own decisions. "I bamboozled our lazy IRB into approving our study, so I'm not responsible for it being obviously a bad idea" just isn't good enough.

If you think about it, it's actually kind of meta to the code review process they were "studying". Just like IRBs, code review is good, but no code review process will ever be good enough to stop every malicious actor every time. It will always be necessary to track the reputation of contributors and to be able to mass-revert contributions from anyone later determined to be actively malicious.
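
To make that last step concrete, here's a minimal sketch (not anything from the study or the kernel's actual tooling; the author address is hypothetical, and it assumes a clean working tree and conflict-free reverts):

    # Minimal sketch: revert every commit by a hypothetical bad actor.
    import subprocess

    AUTHOR = "badactor@example.com"  # hypothetical address, for illustration

    # `git log` lists commits newest-first, which is the order to revert
    # in, so each revert applies cleanly on top of the previous state.
    hashes = subprocess.run(
        ["git", "log", "--format=%H", f"--author={AUTHOR}"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    for commit in hashes:
        # --no-edit keeps git's default "Revert ..." commit message.
        subprocess.run(["git", "revert", "--no-edit", commit], check=True)

Something along these lines is reportedly what the kernel maintainers ended up doing here: revert everything from the offending addresses, then re-review each change by hand.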

[0] https://slatestarcodex.com/2017/08/29/my-irb-nightmare/


I guess I have a different perspective. I know a fair number of world-class scientists; like, the sort of people you end up reading about as having changed the textbook. One of them, a well-known bacteriologist, brought his intended study to his institution's IRB (CU Boulder), which said he couldn't do it because of various risks involved in studying pathogenic bacteria. The bacteriologist, who knew far more about the science than the IRB, explained everything in extreme detail and batted away each attempt to shut him down.

Eventually the IRB, unhappy at his behavior, said he couldn't do the experiment. He left for another institution (UC San Diego) immediately, having made a deal with the new dean to go through expedited review. It was a big loss for Boulder and, TBH, the IRB's reasoning was not sound.



