Hacker News
The Control Group Is Out of Control (slatestarcodex.com)
179 points by nbouscal on April 29, 2014 | 74 comments



This is a very interesting review of some of the most challenging published literature on the reliability of psychological science. The author's point is correct that parapsychology (ESP and the like) basically has no prior plausibility. Yet a researcher who has the chops to do reliable research, Daryl J. Bem, has conducted "experiments" that appear to show that some human experimental subjects can see into the future. As the article says, "Bem definitely picked up a signal. The only question is whether it’s a signal of psi, or a signal of poor experimental technique." Please read this submission from beginning to end and follow the links to see more of the background.

I had the opportunity to discuss the original Bem paper on psi in the journal club I regularly participate in. The conclusion is plainly wrong, but the error in procedure is subtle and it took a while for response papers to come out about his initial finding. (And everyone agrees that Bem is a smart man, so the subtle errors are all the more dismaying.) If Bem can't design experiments carefully, who can?

AFTER EDIT TO RESPOND TO QUESTIONS IN ANOTHER SUBTHREAD HERE:

Another participant is asking why parapsychology has poor prior plausibility. Parapsychology is a rather recent discipline, always with sloppy experimental practices,[1] and meanwhile the disciplines of physics and medicine are much older. The study of how information is transmitted by electromagnetic radiation and how the human body perceives sensations is incompatible with supposed abilities like ESP, and completely incompatible with seeing into the future in the manner described by Bem. Extraordinary claims require extraordinary evidence. A million dollars says that psychic claims are bogus,[2] and so far no one has won a million dollars by proving that proposition wrong.

[1] http://www.skepdic.com/projectalpha.html

[2] http://www.randi.org/site/index.php/1m-challenge.html


To your point on poor plausibility, I'd add that people have been trying to prove and develop their psychic powers for millennia. If you look at other common human traits, you can see that we're pretty good at exploiting our natural capacities. Consider how the various martial arts traditions have taken modest raw capacity and developed it to an amazing degree. Ditto science, art, engineering, music, poetry, and literature.

The failure of all of the various paranormal traditions to develop anything demonstrable despite a lot of effort suggests that there is little or nothing there.


> The study of how information is transmitted by electromagnetic radiation and how the human body perceives sensations is incompatible with supposed abilities like ESP, and completely incompatible with seeing into the future in the manner described by Bem

You're not stating this strongly enough. It's not just the "study", i.e., the theory of how information is transmitted, and it's not just EM. Physicists have experimented for decades looking for any kind of interaction other than the four known forces (strong, EM, weak, gravity) at every distance and energy scale. They have found none. So the statement that ESP, precognition, etc. are incompatible with the known laws of physics is not just a statement about our theories; it's a statement about the experimental facts. If there were any real interactions that could mediate these kinds of abilities, we would already have found them. Sean Carroll explains it here:

http://blogs.discovermagazine.com/cosmicvariance/2008/02/18/...


And yet entanglement causes changes to occur instantaneously over unlimited distances. What's worse, there are many readings of it where one's own consciousness impacts that change.

These things occur, that is a given. How or why is left to interpretations like the Many Worlds interpretation, the Transactional interpretation, the Bohm interpretation, Quantum Information, etc. The point is that these interpretations are more the philosophical ramblings of scientists saying "It's the way I think it is because X and Y and Z."

Psi is an interesting topic. I think the likelihood of it existing is small, but if you'd asked anybody in 1900 what the likelihood was of quantum physics existing in the form it was ultimately found, you would have been laughed out of the room without an answer (even though even Newton had reservations about whether light was a particle or a wave).


> And yet entanglement causes changes to occur instantaneously over unlimited distances

That is an interpretation; specifically, it's the Copenhagen Interpretation. The other interpretations you mention all say different things about what is "really happening" with entanglement. The only other one I'm aware of that would describe it similarly to "changes occurring instantaneously over unlimited distances" is the Bohm interpretation, but AFAIK that interpretation only works for non-relativistic QM; there is no relativistic version of it, which means it can't be right as it stands.

In any case, one thing all the interpretations agree on is that entanglement cannot be used to transmit information instantaneously over unlimited distances. So entanglement can't produce the sorts of phenomena that are claimed to occur in psi, and the experimental facts are still just as I described them.

> if you'd asked anybody in 1900 what the likelihood of quantum physics existing was in the form it has ultimately been found, you would have been laughed out of the room without an answer

That's because nobody had actually done the experiments we've done now. We have quantum physics because experiments forced us to it, not because theorists thought it was a cool idea--i.e., because experiments ruled out the "classical" view of the world that everybody had before quantum physics. So it seems perfectly reasonable to me to rule out psi based on experimental data that rule out the existence of any interactions that could cause it.


For the record, my comment was not meant to imply that entanglement is the source of psi; it is rather clear that it is not (regardless of how many pseudo-scientific books on Amazon say otherwise). My point is that the world turned out to be much weirder than we would have given it credit for a century or so ago, to the point that we still don't fully understand it (even if we can use it to make predictions to a remarkable level of accuracy). After all, to cover it today, we have to manufacture new universes every time a person makes a decision just to make the data fit (obviously, I'm reaching into hyperbole to make a point, but only because I put the person in the middle).

Could psi be an exhibition of some other weirdness, something in the realm of the indecipherable thing we call consciousness or some other effect we have yet to find? I think the odds are very long against it, but they are probably greater than zero. We live in a universe that may be a 3D hologram above a 2D reality; that leaves lots of room for things we don't understand. I'm not ready to concede that we know everything (but while a handful of current psi researchers are interesting, I'm fully on the skeptical side).


> Could psi be an exhibition of some other weirdness, something in the realm of the indecipherable thing we call consciousness or some other effect we have yet to find?

No, because, once again, if there were any such weirdness that could transmit information through non-sensory channels, or allow people to influence matter with their minds, etc., etc., we would already have seen it in the experiments we have run. Saying magic words like "consciousness" does not change that.

Yes, we don't know everything, so we can't rule out everything that isn't covered by our current knowledge; but in the areas of physics we do know, we have a very good understanding of the boundaries of our knowledge, which means we have a very good understanding of what is ruled out by what we already know. The Carroll article I linked to goes into all this.


There are plenty of reasons the $1M might not have found a psychic new owner yet.

(1) Randi is condescending and downright nasty to people who identify as psychics. $1M might not be worth being in his presence, let alone scrutinized by him.

(2) Randi has a lot invested in being right, so the tests could be downright rigged.

(3) Many psychics propose all humans radiate quasi-electromagnetic energy, and their power comes from consciously directing it. Randi could therefore be unintentionally interfering.

(4) The test could be inadequate to describe the psychic phenomenon. For example, the psychic could be skilled at generating a particularly strong placebo effect, which would get written off regardless.

(5) The results might be dismissed because they would invalidate too many existing theories, all of which have been constructed on the assumption that psi does not exist. A frequently accepted circular argument.

I'm not saying any of these explanations are true, but I am saying that no one knows either way.


Just one point:

> (3) Many psychics propose all humans radiate quasi-electromagnetic energy, and their power comes from consciously directing it. Randi could therefore be unintentionally interfering.

If they really believe that's the case, then they should make a different million dollar claim: that they can remotely detect Randi's presence or absence in the monitoring room by observing whether or not their psychic powers are currently working. It's very easy to imagine a test protocol for that.

Of course, one can invent an even more elaborate effect that would explain why that test also wouldn't work, and then I can design a test for this new idea, ad infinitum.


I can't speak of psychic energy as it's not well-defined but if anyone is interested in some interesting science on radiation emitted by humans, here are two good studies:

http://www.ncbi.nlm.nih.gov/pubmed/1767800

http://www.ncbi.nlm.nih.gov/pubmed/23865304


I will note for onlookers that your critique of the James Randi Challenge for persons who claim to have paranormal abilities is largely responded to in the FAQ[1] on the website for the challenge, which I invite you and all interested readers here to read. Certainly, if you personally know someone who has paranormal abilities, you should encourage the person to gain the one million dollars and then tell the world about it.

[1] http://www.randi.org/site/index.php/component/content/articl...


That's a long FAQ, which argument has been answered by which FAQ entry? Be specific.


2, 3, and 4 have all been dismissed by the explanation of the tests offered to me in the past, particularly in pointing out that the person being tested has a great deal of influence on both the specific claim being tested and the way in which it is tested. The tests happen on a basis of consensus between the two as to what would be a reasonable test, rather than the testers simply dictating.

1 denies the history of psychics taking the test and speaking about it in relatively friendly terms, and 5 denies the history of major scientific upsets caused by evidence-backed theories which contradicted the established school of thought.


Fair enough for 1-4.

As for 5, it's just plain hard to get solid results when studying humans, which seems to leave plenty of room for outright ignoring evidence.


FAQ 1.4: "we ask the applicant to design the test".

FAQ questions to Randi: "The hardest part has always been to get the claimant to state clearly what he or she thinks they can do, under what conditions, and with what accuracy. Most are very vague about these aspects, and very few have any notion of how a proper test should be conducted. We at JREF sometimes take months getting those matters settled"

FAQ 5.2: "All tests are videotaped and stored. You will be asked to state, on camera, that the protocol is fair prior to the test. You will be asked to state it again afterward. If you do not feel that the protocol is fair beforehand, you are not obligated to continue the test."

The videotaping particularly, and the comparison of the tape against a mutually agreed, specific plan that the applicant designed, leaves little room for outright ignoring what happened.


> leaves little room for outright ignoring what happened

I'm not saying Randi is ignoring evidence. The scientific community is ignoring evidence in parapsychological papers that conform to current standards of scientific scrutiny. There are legitimate reasons to examine scientific methodology, but doing so specifically to be able to ignore parapsychological evidence just speaks "bias" to me.

But I am saying that absence of proof is not proof of absence. Randi could well be selectively attracting poor psychics. Lady Gaga didn't go on American Idol.


(1) has to be one of the weakest arguments for psychic phenomena I've ever seen, and that's saying a lot.


I'd say it's good enough as an argument for not outright dismissing psychic phenomena.


It was an interesting read, but the author showed absolutely no clue why pre-registration was important. The point is to avoid selection bias. If the exact same study is done 100 times, but only the best 50 results get included, a meta-analysis of those results is going to show a strong statistical bias. No amount of waving around the word "Bayesian" can fix this - the only way to fix it is pre-registration of honest results.

Why am I so certain of this? Because of the following passage:

> By my count, Bem follows all of the commandments except [6] and [10]. He apologizes for not using pre-registration, but says it’s okay because the studies were exact replications of a previous study that makes it impossible for an unsavory researcher to change the parameters halfway through and does pretty much the same thing.

But that misses the whole point. Whether the studies had the same or different parameters is irrelevant - the fact that you see the ones that worked out and don't see the ones that didn't is the only thing that matters. And the fact that the meta-analysis didn't get this right makes the output statistical garbage. No further discussion needed.


To give a concrete example, let's say I want to show that I can influence a coin flip with my mind in order to produce a heads result every time. I do this by publishing trials where each instance involves me flipping a coin four times.

I conduct 16,000 trials. On average, 1,000 of them will show that I can flip a coin and get heads every time. If all I publish are these trials, that yields the amazing result that I can, in fact, influence the coin! Furthermore, the p-value is phenomenally small. And it's very easy to design this experiment to meet all the criteria mentioned in the post other than preregistration.
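The arithmetic is easy to check with a quick simulation (a hypothetical sketch, not anything from the article or the thread):

```python
import random

random.seed(0)

TRIALS = 16_000
FLIPS = 4

# Run 16,000 honest trials, each consisting of four fair coin flips.
all_heads_trials = sum(
    all(random.random() < 0.5 for _ in range(FLIPS))
    for _ in range(TRIALS)
)

# By chance alone, about 1/16 of trials (~1,000) come up all heads.
# "Publish" only those, and the selected record shows a 100% heads
# rate from a perfectly fair coin.
print(all_heads_trials)
```

With selective publication, no amount of per-trial rigor rescues the aggregate: the bias lives entirely in which trials you get to see.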


> It was an interesting read, but the author showed absolutely no clue why pre-registration was important. The point is to avoid selection bias.

You've missed his fundamental point: the mechanisms he listed like pre-registration are excellent for defending against plucking signal out of random sampling error. They do nothing for when you are investigating a real effect which is not what you think you are investigating: the point of the spooky studies listed is that in the psi experiments, there is still a signal of something. We don't think it's actual psi in the sense of supernatural activity, but the alternate explanations seem almost as bizarre and have almost as painful implications.


I don't think btilly thought he was refuting the article, just commenting on a not-very-central-but-important misperception that it seemed to contain, and the comment seems accurate: implicit precommitment to experimental details in a replication solves some of what pre-registration is about, but not all of it, and arguably not the more important bit. This seems to be true whether or not the article's thesis holds, and its being true doesn't undermine the article's thesis.


I did not miss his fundamental point. I was making a different fundamental point about why I view virtually every meta-analysis out there to be even more suspect than regular research.

It was the point which is most commonly pointed out by Randi et al, but which would be most worrisome for the author because it throws into serious question the kind of research that he thinks should be most reliable.


> I did not miss his fundamental point. I was making a different fundamental point about why I view virtually every meta-analysis out there to be even more suspect than regular research.

It certainly sounded like you missed it, but regardless: meta-analysis is well aware of the selection issue. Dealing with it is half the point of meta-analytic techniques - p-curves, heterogeneity tests, funnel plots, the binomial test, trim-and-fill, etc. Publication bias is an issue which has been repeatedly quantified, and these techniques work reasonably well: when meta-analyses are compared to very large RCTs where selection issues are not a concern, the agreement is pretty good (confidence intervals are blown a bit more than they should be, but not horribly so). So you're either ignoring OP's very interesting points or you're tendentiously overrating an issue. Neither makes for a worthwhile comment.
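To illustrate why funnel-type diagnostics have teeth, here is a minimal simulation (all numbers invented for the sketch): when a true null effect is published selectively, the surviving small studies show systematically larger effects, which is exactly the funnel-plot asymmetry these techniques look for.

```python
import math
import random
import statistics

random.seed(1)

def run_study(n):
    """Effect estimate and standard error for a true null effect, n subjects."""
    se = 1 / math.sqrt(n)
    return random.gauss(0, se), se

published = []
for _ in range(2000):
    n = random.choice([20, 50, 100, 400])
    est, se = run_study(n)
    if est / se > 1.96:  # selective publication: significant positives only
        published.append((est, se))

# The naive pooled estimate is biased upward, even though the truth is zero.
pooled = statistics.mean(e for e, _ in published)

# Funnel asymmetry: published small studies (larger SE) show bigger effects.
small = statistics.mean(e for e, s in published if s > 0.11)   # n = 20, 50
large = statistics.mean(e for e, s in published if s <= 0.11)  # n = 100, 400
print(round(pooled, 2), round(small, 2), round(large, 2))
```

The pooled "effect" comes out clearly positive, and the small-study estimates exceed the large-study ones, which is the signature a funnel plot or trim-and-fill procedure exploits.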


There is this rather amusing section, though:

"(Richard Wiseman – the same guy who provided the negative aura for the Wiseman and Schiltz experiment – has started a pre-register site for Bem replications. He says he has received five of them. This is very promising. There is also a separate pre-register for parapsychology trials in general. I am both extremely pleased at this victory for good science, and ashamed that my own field is apparently behind parapsychology in the “scientific rigor” department)"


After reading that article, I actually came away with a very positive view of the field of parapsychology in general. Of course psychics aren't real, but the Flying Spaghetti Monster can be used to make legitimate points about the study of religion, to use an analogy.


He addressed ways other than pre-registration you can deal with selection bias:

7. Address publication bias by searching for unpublished trials, displaying funnel plots, and using statistics like “fail-safe N” to investigate the possibility of suppressed research.

It is, of course, possible with cleverness and organization to get around that sort of analysis in a way that it isn't with pre-registration, but that requires an actual conspiracy rather than just not publishing negative results.
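For the curious, the classic Rosenthal "fail-safe N" mentioned in that commandment can be sketched in a few lines (this is the textbook formula; individual meta-analyses, including the one under discussion, may use more conservative variants):

```python
def fail_safe_n(z_scores, z_crit=1.645):
    """How many unpublished null studies (averaging z = 0) would be needed
    to drag the combined one-tailed result below significance at p = .05?"""
    s = sum(z_scores)
    return s * s / (z_crit * z_crit) - len(z_scores)

# e.g. 20 published studies that each just reached z = 2.0
n_needed = fail_safe_n([2.0] * 20)
print(round(n_needed))  # 571 file-drawer studies
```

A large fail-safe N is read as evidence that the file drawer alone is unlikely to explain a positive pooled result.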


...but that requires an actual conspiracy...

No, it doesn't require an actual conspiracy. All it requires is that a subpopulation of researchers has consistent biases which you didn't completely correct for. Given how statistics works, the larger the meta-analysis, the smaller the portion of biased researchers you need to come up with compelling statistics.


From A Meta-Analysis of 90 Experiments on the Anomalous Anticipation of Random Future Events [1]:

> [...] a meta-analysis of 90 experiments from 33 laboratories in 14 different countries which yielded an overall positive effect in excess of 6 sigma with an effect size (Hedges’ g) of 0.09, combined z = 6.33, p = 1.2 × 10^-10. A Bayesian analysis yielded a Bayes Factor of 7.4 × 10^-9, greatly exceeding the criterion value of 100 for "decisive evidence" in favor of the experimental hypothesis (Jeffreys, 1961). [...] The number of potentially unretrieved experiments averaging a null effect that would be required to reduce the overall effect size to a trivial value was conservatively calculated to be 520. An analysis of p values across experiments implies that the results were not a product of "p-hacking," [...].

[1] http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2423692
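For what it's worth, the quoted z and p are at least internally consistent; the one-tailed normal tail probability for z = 6.33 can be checked directly:

```python
from math import erfc, sqrt

z = 6.33
# Upper-tail probability of a standard normal: P(Z > z) = erfc(z / sqrt(2)) / 2
p = 0.5 * erfc(z / sqrt(2))
print(p)  # on the order of 1.2e-10, matching the quoted figure
```

Of course, as the rest of the thread argues, an internally consistent p-value says nothing about whether the effect is psi or methodological artifact.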


So... I know next to nothing about parapsychology itself, but have seen a lot of the drama around it, and I just have to ask: is it not intellectually dishonest to call something the 'control group of science' when their results overwhelmingly support the hypothesis? I have a hard time seeing how this is different from any other form of science denialism. "We don't like the results because they clash with our preconceived notions of how the universe works so we made up this thing to ignore your evidence"? That's hardly a valid complaint. Basically, on what grounds can people claim one field to be nonsense (e.g. calling parapsychology the control group of science) but not others? Can someone explain this to me?


It's a valid question. The article takes the following premise: Parapsychology gets positive results following the scientific method as defined by prevailing norms in the scientific community. So why can't we draw the conclusion that parapsychology is as legitimate as other branches of science? What is to separate it?

I would offer a pseudo-Bayesian[1] answer to that question. Parapsychology aims to prove hypotheses that lack theoretical foundations. Our current understanding of physics and biology weighs strongly against the existence of psychic phenomena. Even before any experiment is conducted, we must admit that psychic phenomena are unlikely to exist. Our experimental results must be evaluated in light of that prior probability.

Thus, parapsychology is and should be held to a higher burden of proof than other branches of science. We should demand more rigorous experimental designs, stronger effects, and smaller p values. This XKCD presents a similar idea, if you substitute "psychic phenomena exist" for "the sun has gone nova":

http://www.explainxkcd.com/wiki/index.php?title=1132:_Freque...

[1] I say "pseudo" because I'm not a statistician by trade. I'm basing my argument on my rather superficial understanding of Bayesian statistics. I still think it's a valid argument in its own right, but I don't claim that it's an accurate representation of Bayesian statistics.
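A toy Bayes update makes the mechanics concrete (all three numbers are invented for illustration): even a strikingly small p-value barely moves a hypothesis that starts from a tiny prior.

```python
prior = 1e-6               # P(psi exists) before seeing the experiment
p_data_given_psi = 0.5     # generous: real psi yields this result half the time
p_data_given_null = 0.001  # roughly the experiment's p-value

# Bayes' rule: P(psi | data)
posterior = (prior * p_data_given_psi) / (
    prior * p_data_given_psi + (1 - prior) * p_data_given_null
)
print(posterior)  # ~5e-4: still overwhelmingly likely to be a fluke
```

This is the same logic as the XKCD strip: a detector that "lies" one time in many is still a far better explanation than a one-in-a-million event.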


Parapsychology aims to prove hypotheses that lack theoretical foundations.

This is the linchpin for most scientists. Before you can have a hypothesis, you must have a theory, and you can work to prove or disprove that theory by experimenting to create or observe results that the theory predicts. Parapsychology doesn't have good, testable theories. Rather, it has some interesting unexplained correlations.


Doctors rejected hand washing for a long time because there was no theory behind why it would improve patient care, even when there was overwhelming evidence showing it decreased mortality rates.

Do you discount all evidence that doesn't fit into your world view? Maybe theory hasn't caught up with evidence yet.


The answer of course is no. The degree to which we think it's shit is related to how much we know about related fields and the size of the effect, among other things. The prior isn't binary.

For the topic at hand, ESP would most likely invalidate quite a bit of physics that no one is questioning for other reasons, and there is no proposed theory to explain the effects. Our confidence in the studies is rightfully close to zero.


> Doctors rejected hand washing for a long time because there was no theory behind why it would improve patient care

The germ theory of disease and experiments supporting the theory predate medical sanitation. (Pasteur's work did come after, but he wasn't the first.) The medical community's initial rejection of handwashing was not due to a scientifically motivated demand for a sound theory. Rather, it was due in large part to doctors' unwillingness to believe that they were the ones spreading disease from patient to patient. Additionally, the medical community at the time did not embrace the scientific method to the extent that it does today. Had it, handwashing would have been evaluated in a controlled study and proven effective.


http://en.wikipedia.org/wiki/Ignaz_Semmelweis

> Despite various publications of results where hand-washing reduced mortality to below 1%, Semmelweis's observations conflicted with the established scientific and medical opinions of the time and his ideas were rejected by the medical community. Some doctors were offended at the suggestion that they should wash their hands and Semmelweis could offer no acceptable scientific explanation for his findings.

He published a lot on the subject and was rejected because it didn't fit in with how doctors saw the world and he had no theory to back up his findings. It wasn't until Pasteur that germ theory gained any widespread acceptance so it's pretty irrelevant that other people thought of it first.

Plenty of things are evaluated in controlled studies and still rejected today. For example, the article that you presumably just read discusses one such thing.


Individual experiments give false positives and false negatives all the time. Think about how you might film this using a fair coin: http://www.youtube.com/watch?v=X1uJD1O3L08 Hint: 10 heads in a row is only a 1/1024 chance, but 1,024 attempts is not that rare. Now realise that 5 heads in a row is a 1/32 chance, and 1/20 is considered acceptable to publish...
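Those probabilities are quick to verify:

```python
# A run of 10 heads in one filming session of 10 fair flips:
p_run_of_10 = 0.5 ** 10                         # = 1/1024

# Chance of seeing at least one such run across 1,024 sessions:
p_seen_in_1024 = 1 - (1 - p_run_of_10) ** 1024  # roughly 0.63

# A run of 5 heads clears the usual publication bar on its own:
p_run_of_5 = 0.5 ** 5                           # = 1/32 ~= 0.031 < 0.05

print(p_run_of_10, round(p_seen_in_1024, 3), p_run_of_5)
```

So a patient filmmaker expects to capture a "miraculous" 10-heads run after about a thousand takes, and a 5-heads run already beats the conventional p < 0.05 threshold.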

Science is not based on a single experiment it's based around replication of experiments by different people at different times using slightly different methods. Parapsychology repeatedly fails this test to the point where there is a million dollar prize for anyone that can demonstrate anything in a controlled setting.


The process to win Randi's prize is not science; it is public relations for the ideology of eliminative materialism.

Science is the testing of a hypothesis by applying an instrumental injunction to reality, apprehending and analyzing the results, and sharing/verifying the results with a community of experts who have also performed the same injunction (reproducibility).

Randi's process shares none of these characteristics with the scientific process. It is entertainment, a publicity stunt that has little to do with science other than to muddy the waters when discussing these topics.


There are two differences. The first is that you have to replace "preconceived notions of how the universe works" with "vast body of experimental data and scientific understanding of how the universe works". Our evidence for how physics works vastly outweighs current parapsychology research results. The second difference is that the evidence is not being ignored (at least by the author of the article), but is being taken to indicate a real effect, it just isn't the effect of psychic powers. Our knowledge of physics means that the existence of psychic powers is given a much lower prior probability than the possibility of widespread experimental error and bias. Since both explanations are likely to produce slight positive results, the existence of psychic powers is still unlikely once you take the evidence into account.

However, the important point in the article is that in order to make this inference in an intellectually honest way, you need to significantly increase your estimation for the probabilities of widespread experimental error in ALL scientific studies that use similar methods. Since the methods in parapsychology are pretty good, this has quite a far reaching effect.

In a sense, your question about "on what grounds can people claim one field to be nonsense but not others?" is exactly the same question asked by the author. Except the author isn't implying that parapsychology isn't nonsense, they are implying that many other fields are nonsense too. This makes sense because physics is almost certainly not nonsense.


One man's modus ponens is another man's modus tollens (http://www.gwern.net/Prediction%20markets#modus-tollens-vs-m...).


Have you read the article? It's basically all along this problem...

tl;dr: Yes, that is intellectually dishonest, which is a huge problem. Scientists have two options: (1) accept parapsychology as real, or (2) accept that the "scientific method" (in social "sciences", at least) is insufficient. The problem seems to be that Bem (the author of the study) did almost everything "right", and if we increase the bar of scientific proof so that his study is excluded, so are many others...

The reason why parapsychology is excluded: the real world doesn't support it (no one is earning huge amounts of money on the stock market using psi). However, the real world often doesn't visibly support very small effects (e.g. Einstein's relativity). Fortunately, physics can afford to make its scientific methods much more rigorous, so it can study large effects as well as exceptionally tiny ones. The social sciences can't, for the time being.


Okay, but that's not actually an answer to my question. My question basically comes down to this:

> Yes, that is intellectually dishonest, which is a huge problem. Scientists have two options: (1) accept parapsychology as real, or (2) accept that the "scientific method" (in social "sciences", at least) is insufficient.

I don't get why the whole thing is such a huge problem. The entire problem rests on needing parapsychology effects to not be real. If that need did not exist, we could just go "Okay, interesting, seems likely that there's something to it then. Let's do more research!" because, you know, we take that approach everywhere else. So my question remains: what is it about parapsychology that makes option two even valid to consider? All I can see is people just not liking that that may be how things work.


Parapsychology is physically impossible, and the evidentiary standards in physics are much higher, so we have much more confidence in our physics results than in these experiments - enough that we can reasonably say that physics is true and parapsychology is false.

(But these experiments are as good as many in psychology / social science - suggesting that many "proven" results in psychology / social science could be false)


> Parapsychology is physically impossible, and the evidentiary standards in physics are much higher, so we have much more confidence in our physics results than in these experiments

Conflicting results don't mean one set is impossible (cf., the long-standing apparent conflict between QM and relativity within physics). Apparently conflicting results without a methodological error in either imply that the explanatory model that appears to be supported by at least one of the results (if not both) is, while useful within its own domain, in some way incorrect.

The whole idea that the models validated by scientific experimentation are binarily true or false is, well, missing the point badly. While over time we hope they approach truth, what they are is useful (that is, they have predictive power) to a greater or lesser extent. And quite often the models with the greatest predictive power in two different domains conflict when either or both are extended outside their own domain.

EDIT: The real problem with parapsychology is that there's little in the way of explanatory models being tested anywhere in the field. There are a lot of hypotheses without models, and some experiments testing them, which (concerns about methodology aside, for the moment) might raise interesting questions and serve as inspiration for developing and then testing theoretical models to explain the effects. But very little has been done there -- which makes "parapsychology" more a collection of potentially unexplained phenomena than a branch of science that provides an explanatory model for some set of phenomena.

Which is very different from most of the social sciences.


> Apparently conflicting results without a methodological error in either imply that the explanatory model that appears to be supported by at least one of the results (if not both) is, while useful within its own domain, in some way incorrect.

When scientists thought they had found particles travelling faster than light, they checked the results, then the equipment, and then they assumed they had made a mistake and asked other people to check the numbers and the experiment. They realised that they had an extraordinary result and they wanted a very high degree of rigour.

Some parapsychologists appear to rush to publish weak results and to claim success for flawed experiments.


I'm reminded of the history of the measured charge of the electron. The first experiment to measure it was Millikan's oil drop experiment, which got a value smaller than current measurements. As other scientists made their own measurements (with different experiments), the measured value slowly increased. What is interesting is that honest error alone would not produce a gradual increase in the observed value. The explanation is that when people found a value that was "too high" relative to the accepted one, they would look harder for sources of error that would lower it, causing a systematic bias to under-report the charge.

Similarly, with the faster-than-light neutrino, we spent far more effort looking for mistakes that would have made our answer bigger than it should have been, which introduces the same systematic bias.

The solution to this is to realize that science is a time consuming process, and it is okay to take a while to arrive at the right answer. But, if we are aware of these problems, we can get there faster.
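The anchoring dynamic described above is easy to sketch. Here's a toy simulation (all numbers invented for illustration, not real electron-charge data): results that land well above the previously accepted value get "rechecked" and partially pulled back toward it, while low results pass unchallenged, so the reported sequence creeps toward the true value instead of jumping there.

```python
import random

random.seed(0)

TRUE_VALUE = 1.602   # stand-in for the "true" constant (units arbitrary)
anchor = 1.500       # first accepted value, biased low like Millikan's

reported = [anchor]
for _ in range(30):
    measurement = random.gauss(TRUE_VALUE, 0.05)
    # Asymmetric scrutiny: a result well above the accepted value gets
    # "rechecked" and partially pulled back toward the anchor; a low
    # result is accepted without a second look.
    if measurement > anchor + 0.03:
        measurement = anchor + 0.6 * (measurement - anchor)
    reported.append(measurement)
    anchor = measurement   # each new result becomes the next anchor

# Instead of jumping straight to TRUE_VALUE, the reported values creep
# toward it over many "publications".
print(reported[0], reported[-1])
```

Symmetric scrutiny (recheck all outliers equally) would remove the drift; it's the asymmetry that creates it.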


> When scientists thought they had found particles travelling faster than light, they checked the results, then the equipment, and then they assumed they had made a mistake and asked other people to check the numbers and the experiment.

And as it turns out, it was a mistake after all. Physics is solid to a satisfying number of digits after the decimal point.


> Some parapsychologists appear to rush to publish weak results and to claim success for flawed experiments.

So do some scientists in the supposedly "better" fields [1].

I think the fact of human nature at issue here isn't specific to any particular field of study.

[1] For a particularly well-known example, consider http://en.wikipedia.org/wiki/Cold_fusion#Fleischmann.E2.80.9...


Possibly true, but I think it's beside the point. You are holding parapsychologists to a much higher standard than the rest of science (except physics). That's what the entire article is about, really.


Yes, that's the point - the fact that parapsychology passes these criteria throws the rest of science into doubt.


You mean, except physics, chemistry and biology, i.e. sciences that are based on numerous, replicable, measurable, non-subjective experiments. In contrast, economics, psychology, sociology, and even medicine (at least the parts that are not performed in a lab, such as biomedicine or molecular biology) are not really sciences, but merely studies.


Jack Parsons, who was quite into parapsychology, phrased it quite neatly: it's when science becomes closed-minded and degenerates into ancestor worship.


If anyone here hasn't read the comments on this as well, go read Eliezer Yudkowsky's comment on meta-analysis:

http://slatestarcodex.com/2014/04/28/the-control-group-is-ou...


I've spent a lot of time in the past year studying nutrition as I've tried to optimize my health. I don't believe there is a field that gets science more wrong and, as a result, whose meta-analyses are, to put it in Yudkowsky's words, more bullshit. Unfortunately, I don't think there is another field whose bullshit gets more media play, either (except maybe politicians, but that's a tautology).


I think it's a larger issue with life science that everyone is looking for correlations. In physics, correlation is the hint that leads to the development of a theory, which leads to predictions, which can be invalidated by experiments. In life science, it seems correlation is the end goal. I think the issue is with the kind of questions one poses, not only with the interpretations.


Just to be clear: this is incredibly wrong, and clearly this poster has never read life science research. It's very strange for such a blatantly misinformed comment to stick around on HN.

There are no biologists out there who are just looking for correlations, mistaking them for the real finding. Look through the life science papers in Nature, Science, Cell, PLoS Biology, etc. You will find people starting with a hypothesis, and running controlled, causal experiments, often with two or three different methodologies to establish non-correlational evidence. These are often based on initial observations that are correlational, but there is no publication where people are like "whoa, we found a correlation, our work is done!"

In contrast, there are some subfields where most of the work is correlational, and those are the fields where researchers are limited to analyzing data more than running experiments: epidemiology and bioinformatics. Now epidemiologists collect tons of data, but there's very little that they can do in noncorrelational analysis, because that's the nature of ethics and human life. But that doesn't mean that they don't try to invalidate their correlation-based hypotheses with experiments; it's just that they get to run very few of them. Bioinformatics is somewhat different, and in some ways is much more like theoretical physics, in that developing new methodology can be publication-worthy even if no new data was collected to validate the computational methodology.

In short, there's no truth at all to what return0 posted.


That is ignorant bullshit.


The fact that a lot of journals refuse to publish negative results is damaging to science and to our understanding.

Relevant XKCD: http://xkcd.com/882/
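The strip's point is easy to reproduce with a quick simulation (a crude z-test on synthetic data; the numbers are purely illustrative): run enough experiments on pure noise and about 5% will come out "significant" at p < 0.05, and if only those get written up, the published record looks uniformly positive.

```python
import random

random.seed(1)

def null_experiment(n=100):
    """Two groups drawn from the SAME distribution: any 'effect' is noise."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5           # std. error of the difference of means
    return abs(diff / se) > 1.96  # "significant" at p < 0.05 (two-sided)

runs = 1000
false_positives = sum(null_experiment() for _ in range(runs))
# Around 5% of pure-noise experiments come out "significant". A journal
# that publishes only positive results turns those flukes into the
# entire visible literature.
print(false_positives, "of", runs)
```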


Starting out with a flawed analogy to placebos makes the whole article hard to swallow. Placebos are complicated, especially for drugs with receptors in the brain. Stating that we know placebos "don't work" is outright incorrect. We know the opposite: placebos can produce real results, and there are physical mechanisms that explain those results. Nothing paranormal about it.

If the conclusion is that there is no magic wand in methodology that makes an experiment "good" - that's true. But the approach gets off to a rough start.


I don't think the analogy with placebos is flawed; it's just not quite the analogy you are describing here.

The point is not that placebos "don't work"; it's that, if a placebo works as well as some drug, the drug itself doesn't work. That is, it adds nothing to what you can already get using the placebo effect. Which may indeed be a "real" result; but why pay extra for a drug to get it when you don't need to?

Note, btw, that while placebos can indeed produce real results, and there are indeed physical mechanisms that explain those results, those mechanisms have nothing to do with what's in the placebo itself; they are entirely to do with what goes on in the mind and body of the person taking the placebo. In that sense, it's true that placebos "don't work": it doesn't matter what specific substance you put in the placebo pill--sugar, starch, gelatin, whatever--as long as it looks and tastes the same to the subject.


Wait a minute. The author calls for "stricter scientific standards", and then has the audacity to define "strict" as "something that excludes parapsychology".

Say it was 1600 A.D. and we would like to improve the standards of science. Everybody knows the cart stops when the ox stops pulling, so why not use this reasonable observation as a litmus test for all future scientific endeavor? Bye, Newton.

The whole article is just a particular set of beliefs trying to institutionalize itself, the very thing science is supposed to put an end to.


That is not what I took away from the article at all.

What I took away is: if you think parapsychology is wrong, then you should be really worried about the rest of science because their studies are not better designed. It's most clearly expressed in this paragraph:

> This is not a criticism of Bem or a criticism of parapsychology. It’s something that is inherent to the practice of meta-analysis, and even more, inherent to the practice of science. Other than a few very exceptional large medical trials, there is not a study in the world that would survive the level of criticism I am throwing at Bem right now.


Alternatively, you can read it that the author is defining strict as, 'Something that would make me believe parapsychology.' At least if it met those standards.


Yes, I understand that, and I believe it would be unscientific to do so.

If psi does not exist, let science discover that, not preclude it.


If you think psi is a part of reality, then anything that doesn't preclude reality shouldn't preclude psi. It'd be like having a scientific method that somehow precluded testing paracetamol for pain management but allowed testing codeine.

I just don't see how it's meant to work here. If you think that something's part of reality, what's the problem with a stricter standard of proof? If someone screws it up, then it'll be obvious: loads of stuff will become untestable - (worse than just being impractically pricey to test, it will become in principle untestable) - at the same time.

Do you have any substantive objections to making the standard of proof stricter, other than that it would preclude psi?

If you think that a stricter standard of proof would preclude psi but not other things, why do you think psi is an accurate description?

----

Maybe what I'm saying is unscientific. :/

So be it, I guess. There's nothing innately great about science - it's just a tool; a method, to get to the truth. The truth itself is only really of value for its instrumental power, as far as I'm concerned - if you didn't require truth to actually do something, then you could make up any method and believe that its results were true.


It is always right to institutionalize correct ideas, like Newtonian mechanics (while keeping its later refinements in mind, of course), or the notion that psi doesn't exist.


Ok, but how do you determine which ideas are "correct"?


My, you have an audacious work ethic, examining every human on the planet just to rule that out. Let it be law, then.


The Ganzfeld experiment suffers from bad statistics but it would sure be cool if it worked:

https://en.wikipedia.org/wiki/Ganzfeld_experiment

It required so many lengthy trials to get statistically meaningful results that only large institutions could pull it off. I think it would make a cool crowdsourced/crowdfunded project today though.

If there's something to it, I'd like to see the relationship to distance and especially time, but I'm not holding my breath.


"You’d have to find a phenomenon that definitely doesn’t exist [...]" (emphasis added)

Welp.


This is one of the better HN posts in a while.

Science is our light in the darkness.

It is not a new form of religion. It does not give us the answers to everything. It may never be able to give us the answers to everything. It does not even give us a way to think.

It does not advance in a linear way. Instead, many false starts are made before progress takes hold.

Better ideas/models do not win out over older ones automatically. Instead, many times an older generation has to die before new ideas can firmly take root.

Funding does not equal knowledge. You can throw as much money as you want at some problems and get nowhere. Many times this is because the questions are being asked the wrong way. But you'll never know -- until progress is finally made.

It's just Bayesian. Science is the study that tells us: if we do something, or if some state exists, then there's a very high likelihood that something else will happen or some new state will occur. We try to create models to explain this tenuous causality, but over the generations models have gotten us into more trouble than they have gotten us out of. If we understand science as just being the study of what most likely follows what, instead of how things work, it's easier on everybody. After all, at the end of the day, even with a model, for you to manipulate the universe around you? You're going to need to know: if you do one thing, what other things will happen.
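That "just Bayesian" framing also connects to the article's point about prior plausibility, and a two-line odds calculation makes it concrete (the numbers here are invented for illustration): even a study whose result is 100 times more likely under psi than under the null barely moves a one-in-a-million prior.

```python
def posterior(prior, likelihood_ratio):
    """Bayes in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Invented numbers: a 1-in-a-million prior for psi, and a study whose
# result is 100x more likely under psi than under the null.
p = posterior(1e-6, 100)
print(p)   # posterior is still only about 1 in 10,000
```

This is the quantitative version of "extraordinary claims require extraordinary evidence": the likelihood ratio a study supplies has to be on the order of the prior odds against the claim before the posterior becomes appreciable.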

I fucking love science. But it's not what people think it is. We teach science in much too prideful a manner. If anything, the history of how science has been done should give us a deep and overpowering sense of humility.


I agree. Excessive romanticization of science in the mass media is one of the greatest problems facing science today.

Perhaps people are so used to religion that they think science works just like religion, expecting it to offer answers to questions like "what is the meaning of life?" and whatnot.

The ouroboros diagram at the end of the article is an accurate description of not only science but pretty much every human endeavor. We cannot reach "pure unadulterated reality", only a reflection of our beliefs and biases, on the basis of which we form other beliefs and biases.

Fortunately, reflections depend not only on what is reflected but also on the reflector's own properties. Nature is our reflector, so we can indeed make progress over long time scales. To think that science will help us overcome our very human weaknesses, however, is not only arrogant but dangerous. After all, the only way to realize this flawed ideal of pure science would be to eliminate all traces of our humanity. What good is a "light in the darkness", as you said, if the light is so bright it blinds the person who wanted to see?


One of the assignments in my high-school science class was to do a presentation on the history of some scientific question. This meant stating the question, explaining the early hypotheses, explaining the early tests that disproved those, explaining the new hypotheses, explaining the new tests that disproved those, and so on.

At the end of almost every one of those presentations, there was a strong feeling of "then what?" hanging in the classroom, until people realized that the most recently stated hypothesis is what they had learned, and that scientists have not yet discovered why it is incorrect.


It used to be that part of the magic of becoming an expert in something was learning all the 17,000 things that we still didn't know. Scientists were very proud of all the work ahead of them.

Nowadays physicists still mostly sound that way, but a lot of others, including many fields that we would consider hard sciences, are taking a "we're smarter than you" attitude when dealing with the general public. Even if they are correct in one particular instance, the idea of placing science on some kind of pedestal where it can be asked anything from "what makes a good life" to "what's the mass of an electron" is crazy. A really bad idea.

Science has always been political, but lately it's getting politicized: it's choosing up teams and playing the role of arbiter of truth. That's bad for all of us.



