> A common finding is that the further one goes from pure math, the less reliable the provable facts become, with social science sitting furthest away on the spectrum.
I think this is true, but it's not because of the distance from pure math. It's a choice made by researchers and journals regarding the standard of evidence required.
Physics demands very strong evidence for publication: a 5-sigma significance threshold, corresponding to a p-value below about 3×10⁻⁷, roughly a 1 in 3.5 million chance of a random experiment producing a publishable result.
Social science is typically satisfied with a p-value of 0.05, a 1 in 20 chance of a random experiment producing a publishable result. That means a whole lot of published results are nothing more than the scientific equivalent of a pair of dice coming up snake eyes.
In fact, rolling snake eyes (1/36) is less likely than a null social-science experiment clearing the publication bar (1/20).
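For concreteness, here's a minimal sketch of that arithmetic, using scipy's normal distribution to convert the 5-sigma threshold into a p-value (the tool choice is mine, not part of the original argument):

```python
from scipy.stats import norm

# One-tailed p-value for a 5-sigma threshold (the particle-physics convention).
p_physics = norm.sf(5)        # ~2.87e-7, about 1 in 3.5 million
p_social = 0.05               # the conventional social-science cutoff, 1 in 20
p_snake_eyes = 1 / 36         # two dice both showing 1

print(f"5-sigma threshold:  p = {p_physics:.2e}  (1 in {1 / p_physics:,.0f})")
print(f"p = 0.05 threshold: p = {p_social}  (1 in {1 / p_social:.0f})")
print(f"snake eyes:         p = {p_snake_eyes:.4f}  (1 in 36)")
print(f"0.05 cutoff is easier to hit than snake eyes: {p_social > p_snake_eyes}")
```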
Fortunately, this also means the problem is easy to solve in principle, if the will exists: simply requiring a much higher standard of evidence would filter out most of the false social science results.
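A hypothetical back-of-the-envelope simulation of what the stricter threshold buys, assuming for illustration that every experiment is pure noise (a worst case that isolates the false-positive rate, not a claim about real research):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # a million experiments where the true effect is zero

# Each null experiment yields a z-score that is pure noise.
z = rng.standard_normal(n)

# One-tailed critical values: z > 1.645 corresponds to p < 0.05; z > 5 is 5 sigma.
passes_social = int(np.sum(z > 1.645))
passes_physics = int(np.sum(z > 5.0))

print(f"Out of {n:,} pure-noise experiments:")
print(f"  publishable at p < 0.05: {passes_social:,}")   # around 50,000
print(f"  publishable at 5 sigma:  {passes_physics:,}")  # almost always 0
```

Under the 0.05 cutoff, roughly fifty thousand of the million noise experiments would clear the bar; under the 5-sigma cutoff, typically none do.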