Unless I'm missing something, that statement appears to be correct. They don't say their result isn't erroneous. They say there is statistically only a very tiny chance that the correlation is due to chance. If the correlation is caused by an error in their experiment, their statement is still correct, since the correlation was not due to chance.
>They don't say their result isn't erroneous. They say there is statistically only a very tiny chance that the correlation is due to chance.
Again, a p-value is not the chance that a result is erroneous - the statement you made is 100% mathematically incorrect. That was a major point of my previous post that you missed. It's a common misconception, and the authors of this work have fallen victim to it. The linked PDF from the American Statistical Association explains things in more detail.
You misread me. I said that no one said a p-value is the chance the result is erroneous. They said nothing about errors whatsoever in the quoted statement.
Apologies for the double negative in my last comment.
Without getting so nitpicky that it becomes impossible to say anything without someone complaining it isn't a perfectly complete and true statement, their claim still seems correct to me - especially since it's written for non-statisticians, and they can't exactly go on a five-minute tangent explaining exactly how p-values work.
Perhaps it would have been more accurate to phrase it as: "which means a chance of less than one in a billion of seeing a correlation this strong if there were no real underlying correlation" - but I think few readers would even notice the difference.
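To make the distinction concrete, here's a toy simulation (my own sketch, not from the paper - the sample size and observed correlation are made-up numbers) showing what a p-value actually measures: the probability of seeing a correlation at least this strong *assuming there is no real correlation*, not the probability that the result is a fluke.

```python
# Sketch: estimate a p-value by simulating data with NO true correlation
# and counting how often the sample correlation is as extreme as observed.
# n and observed_r are hypothetical numbers chosen for illustration.
import random

random.seed(0)

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

n, trials, observed_r = 50, 10_000, 0.4  # hypothetical
hits = 0
for _ in range(trials):
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [random.gauss(0, 1) for _ in range(n)]  # independent of xs
    if abs(pearson_r(xs, ys)) >= observed_r:
        hits += 1

p_value = hits / trials  # P(|r| >= 0.4 | no true correlation)
print(f"simulated p-value: {p_value:.4f}")
```

Note that nothing in this calculation is "the chance the correlation is due to chance" - the null hypothesis is simply assumed throughout, which is exactly the subtlety under discussion.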
They could be p-hacking, though reaching 6 sigma through p-hacking without outright fraudulent data would be a pretty impressive feat of p-hacking. I'd say it's more likely that they either made an error or there really is a correlation.
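A quick back-of-the-envelope sketch (my own illustration, not anyone's claim in this thread) of why that is: fishing through 20 independent null tests will often hand you p < 0.05, but the 6-sigma threshold corresponds to p on the order of one in a billion, which ordinary multiple-testing alone essentially never crosses.

```python
# Sketch: simulate "p-hacking" as running many tests on pure-noise data
# and keeping the smallest p-value. All numbers here are illustrative.
import random
from math import erf, sqrt

random.seed(1)

def min_p_over_tests(num_tests, n=30):
    """Smallest two-sided z-test p-value over num_tests null datasets."""
    best = 1.0
    for _ in range(num_tests):
        xs = [random.gauss(0, 1) for _ in range(n)]  # true mean is 0
        z = (sum(xs) / n) / (1 / sqrt(n))            # z-score of the mean
        p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        best = min(best, p)
    return best

# With 20 shots at a true-null effect, p < 0.05 shows up frequently
# (theory: 1 - 0.95**20, about 64% of the time):
experiments = [min_p_over_tests(20) for _ in range(200)]
frac_significant = sum(p < 0.05 for p in experiments) / len(experiments)
print(f"fraction of 20-test fishing trips reaching p<0.05: {frac_significant:.2f}")
```

By the same logic, crossing a ~1e-9 threshold this way would need on the order of a billion independent tries, which is the intuition behind "6 sigma via p-hacking alone is impressive."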