Hacker News

>"This is deeply wrong. A "true positive" in this context would mean that the resting brain activity of multiple subjects is actually correlated with arbitrary length, randomized, moving test windows."

This is the same misconception I have been trying to dispel. The null hypothesis is not the inverse of that "layman's" statement; it is a specific set of predicted results, calculated in a very specific way. It is a mathematical statement, not an English prose statement.

In this case, apparently one part of this calculation (the assumption about autocorrelation; its details don't matter to my point) has led to such a large deviation from the observations that the null model has been rejected. The null model has been rejected correctly. This is not a false positive.

The problem here is not false positives due to faulty stats. It is the poor mapping between the hypothesis the researchers want to test and the null hypothesis they are actually testing.

The tools provided by statisticians look like they worked just fine in this case. If the researchers chose to use them for garbage in, garbage out, that is not a statistical problem.




> This is the same misconception I have been trying to dispel. The null hypothesis is not the inverse of that "layman's" statement, it is a specific set of predicted results calculated in a very specific way.

The entire point of this discussion is that the software is not calculating the null hypothesis that users expect. The fact that the model it does calculate is internally consistent is tautological and irrelevant (though actual bugs were found in at least one package).

As you yourself said: "I didn't read the code, or even the paper very closely." (https://news.ycombinator.com/item?id=12037207) Perhaps you should?


There really is no reason for me to read the paper closely. They say that the null hypothesis is wrong, and they know exactly why. Then, like multiple people responding to me here, they also want to say that the null hypothesis is somehow true.

Everyone who has questioned me admits there is something wrong with the hypothesis they tested. You do it too: "the software is not calculating the null hypothesis that users expect", but then you also want to say the null hypothesis they tested is true! It is just bizarre; what underlying confusion is making people repeat something so clearly incorrect? The null hypothesis cannot be both true and false at the same time.

There is a big difference between a "positive" result that is a "false positive" and a "positive" result due to a false null hypothesis. This is a clear-cut case of the latter.
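The distinction can be sketched with a toy simulation (my own illustrative setup, not taken from the paper): correlate pairs of completely unrelated time series and test them against a null model that assumes i.i.d. samples. When the data really are i.i.d., the roughly 5% of rejections are false positives. When the data are autocorrelated (AR(1) here), the rejection rate explodes; those rejections are correct rejections of a false null model, not false positives, yet they say nothing about the series actually being related.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 200, 2000
crit = 1.96 / np.sqrt(n)  # |r| threshold at the 5% level under the i.i.d. null

def ar1(n, phi):
    # AR(1) series: x[t] = phi * x[t-1] + white noise (phi = 0 gives i.i.d. data)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def reject_rate(phi):
    # Fraction of pairs of *unrelated* series whose sample correlation
    # exceeds the threshold derived under the i.i.d. assumption.
    hits = sum(abs(np.corrcoef(ar1(n, phi), ar1(n, phi))[0, 1]) > crit
               for _ in range(trials))
    return hits / trials

# phi = 0.0: the null model is true, so rejections are genuine false positives.
# phi = 0.9: the null model itself is false, so frequent rejection is the
#            test working correctly on a question nobody meant to ask.
low, high = reject_rate(0.0), reject_rate(0.9)
print(low, high)
```

The first rate stays near the nominal level, while the second is far above it: same test, same "positive" results, but only in the first case does "false positive" describe what happened.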



