Hacker News

1. It's trivially checkable for any existing individual paper. You skim the methods or search for "multiple comparisons" or "false discovery rate" or something like that.

2. For papers where this wasn't done, the already-collected data can be reanalyzed. In fact, you can often correct it without access to the raw data (at least approximately).

3. It means that future papers (where future is somewhere after 2008-9 here) can be done correctly from the get-go; it's not a limitation of the technique or the signal itself.
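For concreteness, here is a minimal sketch of the Benjamini–Hochberg false-discovery-rate procedure that the "false discovery rate" search term refers to. The function name and the example p-values are mine, not from the thread; this is an illustration of the idea, not anyone's published pipeline.

```python
def bh_reject(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a list of booleans, True where the corresponding
    hypothesis is rejected at false discovery rate alpha.
    """
    m = len(pvalues)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k such that p_(k) <= (k/m) * alpha.
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * alpha:
            k = rank
    # Reject exactly the k smallest p-values.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            reject[i] = True
    return reject
```

With four tests and p-values [0.01, 0.04, 0.2, 0.5], an uncorrected 0.05 threshold would reject the first two, but BH rejects only the first; that gap is exactly the kind of thing you can recompute from a paper's reported p-values without the raw data.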




If you believe that the majority of papers in the field are not incorrect, what assumptions underlie your estimate of the numbers?



