
>One of my formative experiences as a PhD student, in 2011, was submitting a replication study to the Journal of Personality and Social Psychology, only to be told that the journal did not publish replications under any circumstances (you might be thinking, “WTF?” — and we were too).

A while back, I asked a psychology professor why replication studies were frowned upon. Her answer amounted to "studies need to be unique and never have been done before" and "there's no money in replicating someone else's work".

If that's true, then I'm guessing we'll continue to get more Amy Cuddys popping out of the social sciences.




Thanks for mentioning Amy Cuddy. I remembered her TED talk but hadn't heard about her since. In the meantime, however, I've been exposed to the academic world of social psychology, and I'm not surprised by the phenomenon of Amy Cuddy. I've seen a fair share of professors using a social psychology course simply to preach what could be called progressive ideology. The funny thing is that I entered the field with progressive views myself, but I'm growing more and more confused seeing the bad faith on display.


It probably takes a different type of academic to do replication studies. It's a specific kind of detective work: seeing whether you can expose someone else's mistakes, which are sometimes even deliberate.

We need more people of this type.

EDIT: In addition to this, perhaps we should encourage new students to do a replication study as part of their education.


It takes the sort of person who doesn't care about collecting a bunch of enemies, possibly well-connected ones.


Also a person who doesn't care about ever being published in "approved" journals, magazines, etc.


IMO this would actually be a great target for public funding: a large international research organization with the sole purpose of replicating, validating, and criticising existing research.

As you correctly point out, the skillset is slightly different from that for novel research, and the attitude is different as well, so it would make sense to pool talent. Since there is no money to be made directly from it, governments should step in. And to avoid the problem of journals not wanting to publish replications, the institute would have to be well funded enough to self-publish with prestige.


What does money matter once a study is already done? Finding the right incentive balance between novelty and rigor isn't trivial, so I get the debate surrounding grant funding. But there should be zero debate about what is publishable once the work is already done.

My guess is that part of the resistance is fear from established PIs that their work won't actually replicate. Even if it's only a minority of profs, I suspect some of the most famous and powerful ones have cut corners or equivocated at one point or another to get to where they are. It's easy for many other PIs to then get swept up in the "opinion of the field".


Journals want highly-cited papers because that boosts their impact factor, makes them more popular and therefore makes them, yes, money.

A paper claiming a new result generally gets more citations than a paper saying "we replicated the study of Doe et al. and reproduced the results", even if the latter is equally or more useful from a scientific standpoint.


And it's much more deeply rooted in human nature than merely the design of peer review or the quantitative metrics used for evaluating scientists.

Indeed, it is not just about money either. Intangible prestige and status among the scientific expert community are coveted just as much, or even more. Do people read your papers and talk about them at dinner parties? Do you get invited to give talks at prestigious institutions? Do a lot of interesting and similarly active people turn out for your talks? Do people with good connections and resources want to collaborate with you on exciting ideas?

And it turns out that what people, including scientists, actually care about is novel, bold, visionary ideas, not drone-like, repetitive, meticulous, detail-oriented work following in the footsteps of some other group. People want something new, something cool, something flashy, something sexy, something surprising. Not just the media! Scientists themselves, too!


Most scientists only want something flashy within the confines of the "cookbook" they've already learned on how to do studies. NIH grant applications are formulaic as hell.

Moreover, many labs organize around a methodology like fMRI rather than around any particular type of question. Imagine planning a multi-decade future career around a single tool that exists today and then pretending you truly care about novelty.

You're spot on that people care deeply about academic status. But what is valued in gaining that status has become deeply broken. The fastest way to status is to put out multiple overhyped individual publications that meet the minimum viable novelty threshold for inclusion in a good journal. Half of the battle is a marketing game.

Increased rigor is a time and money commitment that many aren't willing to make, but true novelty is a much bigger risk and a good way to kill a career for anyone not yet tenured.


My larger point is that this is a general human problem, not a science-specific one. Voters listen to the flashy, corrupt demagogue who speaks to emotions, not the boring politician who speaks in nuance and works transparently. In dating, people complain about the other gender being shallow and overlooking deeper values. On TV, people watch garbage reality shows, so those make the most money. On YouTube, people click thumbnails with obnoxious facial expressions, so those win out. In the movies, the safe bet is to churn out films from the same franchises as before (not unlike the scientist who barely changes things between papers).

It's not gonna change; one has to learn to accept it and adapt to it.


I think this is less clearly true than the funding hype imbalance, though. I've seen PIs complain that a meta-analysis took citations away from their paper that was included in said meta-analysis. A replication study that is well written, particularly if it synthesizes replication results from multiple studies, could overtake the original work in citations over time, IMO. Doubly so if published in a journal that is already fairly reputable.


What is the significance of Amy Cuddy to this discussion?


Per her Wikipedia page [0], she authored papers about power posing, which has since been debunked.

> The theory is often cited as an example of the replication crisis in psychology, in which initially seductive theories cannot be replicated in follow-up experiments.

[0]: https://en.m.wikipedia.org/wiki/Amy_Cuddy


And is still making money off of motivational speaking based on it.


This might explain Amy Cuddy better: https://en.wikipedia.org/wiki/Power_posing


With so many of these stories, is it any wonder that people now refer to this as Science™?


Not only in the social sciences. Setting aside the very visible areas of ML, the same happens in actual science and engineering...

Example: the thousands of fraudulent XRD spectra of made-up compounds.


Even in ML, it's common knowledge that the long tail of papers demonstrates brittle effects that don't really replicate or generalize. They often run incomparable evaluations, fiddle with hyperparameters to fit the test data, use various evaluation tricks (Goodhart's Law) to improve the metrics, sometimes don't cite better prior work, etc. Industry people definitely know not to take a random ML paper and assume it has any use for applications.
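To illustrate the test-set-fiddling point, here's a toy sketch (synthetic data and coin-flip "models", so the numbers are purely illustrative): even when there is no real signal at all, selecting the best "hyperparameter setting" by test-set accuracy reports a score well above chance.

    # Labels are pure noise, so the true accuracy of any model is 50%.
    # Yet picking the best of 100 "hyperparameter settings" by test-set
    # accuracy still reports a number well above chance.
    import numpy as np

    rng = np.random.default_rng(0)
    n_test = 200
    y_test = rng.integers(0, 2, n_test)      # random binary labels

    best_acc = 0.0
    for trial in range(100):                 # 100 "hyperparameter settings"
        preds = rng.integers(0, 2, n_test)   # each "model" is a coin flip
        best_acc = max(best_acc, (preds == y_test).mean())

    print(f"reported 'best' test accuracy: {best_acc:.1%}")  # well above 50%

The inflation is pure selection bias; nothing was learned. Real papers do a subtler version of this across seeds, architectures, and evaluation protocols.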

This isn't to say there are no good works, but in a field that produces more than 10,000 papers per year, the bulk of it can't be all that great. Yet academics have to keep their jobs, PhD students have to graduate, etc. So everyone keeps pretending.


Those papers are not "very visible" ML (NeurIPS and co.) but domain-specific "applications" work, and there's tons of it (as I am acutely aware).


What I wrote also applies to most papers at top-tier conferences like NeurIPS and CVPR. There are thousands of papers published per year even just at those top conferences. What gets picked up by the media, or even just reaches the HN crowd, is just the tip of the iceberg.


> Example: the thousands of fraudulent XRD spectra of made-up compounds.

Interesting, I hadn't heard about this case before. Can you provide a link?




