
It's actually worse than that, if a lot of studies are being done:

Let's say that you only publish when you discover an effect with a p-value better than 0.05 -- that is, when an effect at least as extreme as the one you observed would have had less than a 5% chance of occurring if the effect weren't real. This is pretty typical.

Let's also say that you and 19 other groups are studying an effect that isn't real: the hypothesis that meditating on pink unicorns will get rid of skin cancer.

By (perfectly reasonable) chance, 19 of the 20 groups find nothing significant, and one gets p < 0.05 in favor of the Pink Unicorn Hypothesis -- i.e., one group gets a result that should happen only about 1 time in 20 if there is no Pink Unicorn effect.

Since the first 19 groups are silent and only one group publishes, the only thing we see is the exciting announcement of a possible new skin cancer cure, with no hope of a meta-study noticing that this is actually the expected result under the null hypothesis.

So yeah. That's bad.
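
To put rough numbers on it, here's a minimal Python sketch of the arithmetic (the 20-study, alpha = 0.05 setup is just the illustrative assumption from above, not data from anywhere): under the null hypothesis a p-value is uniformly distributed, so the chance that at least one of 20 independent studies comes up "significant" by luck alone is 1 - 0.95^20, roughly 64%, and the expected number of such studies is exactly 1.

    import random

    # Assumptions (illustrative only): 20 independent studies of a true-null
    # effect, each declaring "significant" when its p-value is below 0.05.
    N_STUDIES = 20
    ALPHA = 0.05

    # Analytic: P(at least one false positive) = 1 - (1 - alpha)^20
    print("analytic:", 1 - (1 - ALPHA) ** N_STUDIES)       # ~0.64
    print("expected false positives:", N_STUDIES * ALPHA)  # 1.0

    # Simulation: under the null, a p-value is uniform on [0, 1],
    # so each study crosses the threshold with probability alpha.
    TRIALS = 100_000
    hits = sum(
        any(random.random() < ALPHA for _ in range(N_STUDIES))
        for _ in range(TRIALS)
    )
    print("simulated:", hits / TRIALS)                      # ~0.64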




In this case, those other 19 groups ought to respond publicly fairly quickly -- "we tried that too, and it didn't work for us."


No, they usually don't, because "it didn't work for us" is usually not conclusive proof to the contrary.

It does (rarely) happen in physics, where everything is expected to be repeatable and results from one experiment carry over to similar experiments. It almost never happens in medicine, where the bar for accepting a hypothesis is already ridiculously low.
