
I was amused to read this:

"They never found the cause for the disagreement, but in late 2014 the NIST team achieved a match with the other two"

at a time when this story, also from Nature, is on the front page: https://news.ycombinator.com/item?id=10383984




I was a bit more concerned than amused.

Having worked with some folks who do these high-precision measurements, it's concerning that they never found the cause of the disagreement. Pinning down systematic error is really, really, really hard.

As Feynman pointed out in reference to the oil drop experiment (https://en.wikipedia.org/wiki/Oil_drop_experiment):

"We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It's a little bit off because he had the incorrect value for the viscosity of air. It's interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan's, and the next one's a little bit bigger than that, and the next one's a little bit bigger than that, until finally they settle down to a number which is higher.

Why didn't they discover the new number was higher right away? It's a thing that scientists are ashamed of—this history—because it's apparent that people did things like this: When they got a number that was too high above Millikan's, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number close to Millikan's value they didn't look so hard. And so they eliminated the numbers that were too far off, and did other things like that..."
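
One way to make Feynman's point concrete (a toy sketch, not anything from the article; all the numbers here are invented): simulate a chain of experimenters who re-check, and effectively discard, any result that lands too far from the previously published value. The published sequence then creeps toward the true value instead of jumping to it.

    import random

    TRUE_VALUE = 1.00   # the "real" charge, arbitrary units
    ANCHOR = 0.80       # hypothetical too-low first published value
    NOISE = 0.15        # honest per-experiment measurement noise
    TOLERANCE = 0.10    # results farther than this from the last
                        # published value get explained away

    random.seed(0)
    published = [ANCHOR]
    for _ in range(30):
        while True:
            m = random.gauss(TRUE_VALUE, NOISE)
            # Accept only results near the previous published value,
            # mimicking "when they got a number too far off, they
            # looked for and found a reason something was wrong".
            if abs(m - published[-1]) <= TOLERANCE:
                break
        published.append(round(m, 3))

    print(published)  # drifts slowly upward toward TRUE_VALUE

Each simulated experimenter is unbiased on their own; the slow staircase toward the true value comes entirely from the outlier-rejection step.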


Regarding the Feynman quote, I suspect that behavior is intrinsic to humans. Few people want to be the one coming up with a wildly different answer from a consensus/authoritative answer. We see the same thing in election polling results exhibiting 'herding'. If a pollster gets a result wildly off from the polling average (especially near the election, when the variance is expected to be lower), they'll sometimes 'put a thumb on the scale', as FiveThirtyEight[0] put it. This makes some sense from a CYA perspective, even if not from a scientific one. If they publish an outlier and are wrong, they look bad. If they publish close to the average and are wrong, well, at least everyone else missed it too.

[0] - http://fivethirtyeight.com/features/heres-proof-some-pollste...
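
A minimal sketch of that thumb-on-the-scale effect (illustrative only; the shrink factor and poll numbers are made up, not taken from the FiveThirtyEight piece): a herding pollster publishes the raw result pulled partway toward the running average, which makes the published polls look tighter without adding any information.

    import random
    import statistics

    TRUE_MARGIN = 4.0   # hypothetical true candidate lead, in points
    POLL_ERROR = 2.5    # honest per-poll sampling error
    SHRINK = 0.7        # fraction of the gap to the average removed

    random.seed(1)
    honest, herded = [], []
    for _ in range(50):
        raw = random.gauss(TRUE_MARGIN, POLL_ERROR)
        honest.append(raw)
        avg = statistics.mean(herded) if herded else raw
        # Publish the raw number pulled toward the polling average:
        # an outlier looks risky, so shrink it before releasing.
        herded.append(avg + (1 - SHRINK) * (raw - avg))

    print("honest stdev:", round(statistics.stdev(honest), 2))
    print("herded stdev:", round(statistics.stdev(herded), 2))

The herded series has visibly lower spread, so it looks more 'confident', but if the early polls happen to miss, every later herded poll misses with them.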


I don't think it's a human thing; I think it's a problem whenever you rely on previous knowledge or share knowledge, which is exactly what science is about. There is no practical experimental set-up that systematically re-establishes all prior knowledge from scratch, so there has to be trust in other scientists, and re-evaluating that trust must be ground out the same way we grind out new scientific results: methodically, reproducibly.

What is problematic when it comes to humans is how our social structures for doing science are organized. They are hierarchical, resources are controlled centrally, and scientists are forced to compete with each other instead of cooperating. Science is a career. We injure science and scientists by tying their economic prosperity to their ability to convince the rest of the world that their work is worth anything. This creates a huge incentive to push forward and a huge disincentive to re-evaluate past results: your reputation can be damaged because you might be undermining the legacy of a high-status scientist, and if you confirm the past result then you haven't done anything new, which reflects poorly on your 'performance'.

Science succeeds in spite of status, institutional monopolies, and hierarchical social organization. It would flourish in a more egalitarian society.

Is it intrinsic to humans to be hierarchical and status-based? I want to say no. I don't think so. It is in the interests of the prevailing powers of the world to convince people that it is, though, because they'd rather we not imagine a world where there isn't power to accumulate and hold on to.


Another thing is doing work in fields considered "fringe". No matter how diligently you follow the scientific method, you'll be ridiculed if you find the "wrong" results.

I've come to the opinion that scientists are ideologues, but the ideology is based in the current understanding of physics rather than the results of experiments and the scientific method. A famous example is the Arago spot (aka the Poisson spot): Poisson ridiculed Fresnel's wave-based theory of light because it predicted an absurd-seeming bright spot at the center of a circular object's shadow, without even bothering to do the experiment; Arago then did it and found the spot.


> I suspect that behavior itself is intrinsic to humans.

Sounds like anchoring: https://en.wikipedia.org/wiki/Anchoring



