
This paper?

"I think the authors of the above-linked paper owe us all an apology. We wasted time and effort discussing this paper whose main selling point was some numbers that were essentially the product of a statistical error."

https://statmodeling.stat.columbia.edu/2020/04/19/fatal-flaw...




I'm skeptical whenever I see a teardown like this that fails to mention that the official case counts have all the same problems. Maybe we should dismiss this paper - but that means committing ourselves to radical skepticism about the prevalence, not going back to believing the numbers printed in the news.


PCR tests are asymmetric. A positive result almost certainly means that the person has the virus, but a negative result can mean a number of things: a bad swab (fewer than 3000 virus copies), temporary remission (Korea has at least 160 such cases by now), etc.

Test kit availability adds another layer. At one point New York had 200 confirmed cases and, two weeks later, 400 deaths, yet the CFR is unlikely to be as high as 200%.
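
As a rough illustration with hypothetical numbers in the spirit of the ones above (the "true infections" figure is entirely made up), dividing today's deaths by the same-day or lagged confirmed count gives nonsense when testing capacity lags the epidemic:

    # Minimal sketch, made-up numbers: why deaths / confirmed cases is not a CFR
    # when test availability limits the denominator.
    confirmed_earlier = 200      # confirmed cases at time t (test-limited)
    deaths_later = 400           # deaths at time t + 2 weeks

    naive_cfr = deaths_later / confirmed_earlier
    print(f"naive CFR: {naive_cfr:.0%}")                 # 200%

    # Those deaths trace back to infections that were mostly never confirmed,
    # so the true denominator is far larger than the confirmed count.
    assumed_true_infections = 40_000                     # purely hypothetical
    ifr_estimate = deaths_later / assumed_true_infections
    print(f"IFR with a plausible denominator: {ifr_estimate:.1%}")

The point is only that the same undercounting that makes the confirmed numbers unreliable makes any ratio built on them unreliable, in a predictable direction.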

So yes, all of these statistics should be taken with a grain of salt, but the magnitude and direction of that grain differ from number to number.


While Gelman does point to issues that may invalidate this paper (mostly test specificity, and potentially the noisy weights), there does not seem to be any "cherry picking" involved.


I would say that even if the study is merely a statistical error, using it to give an implausibly low estimate of the IFR that just happens to fit one author's agenda qualifies as cherry picking. A priori, you cannot use a test with a high false positive rate for a study like this unless the prevalence is much higher than the false positive rate.
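
A minimal sketch of that last point, with made-up sensitivity/specificity numbers in the general range discussed for these serosurveys: when the true prevalence is comparable to the false positive rate, the observed positive rate is dominated by false positives, so the prevalence estimate is mostly noise.

    # Minimal sketch, hypothetical numbers: expected fraction of positive
    # results = true positives + false positives.
    def apparent_positive_rate(prevalence, sensitivity, specificity):
        return prevalence * sensitivity + (1 - prevalence) * (1 - specificity)

    sensitivity = 0.80
    specificity = 0.985          # i.e. an assumed 1.5% false positive rate

    for prevalence in (0.0, 0.005, 0.01, 0.05):
        rate = apparent_positive_rate(prevalence, sensitivity, specificity)
        print(f"true prevalence {prevalence:.1%} -> observed positives {rate:.2%}")

With a 1.5% false positive rate, a true prevalence of 0% and 1% give observed rates of about 1.5% vs 2.3%, which sampling noise can easily blur together; only when the prevalence is well above the false positive rate (the 5% row) does the signal clearly dominate.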



