What would be the alternative to peer review? White papers on “pop” topics? Opinion pieces by think tanks?
The ideal of peer review is pretty much the gold standard of scientific work. By that, I mean the idea that your work should undergo rigorous scrutiny by your peers before it gets accepted for publication. Like any other human system, though, the process is still subject to politics and biases.
My hope is that a truly open review system might one day democratize science and ameliorate the issues with the current review system. For example, it would be cool to put all the revisions, reviews, and authors’ comments in the open as an online appendix. That would show how the paper changes as it moves through the process, the issues the reviewers brought up, and how the authors responded to them. It would also help show whether there is systemic bias against certain researchers or topics.
I think that publication should not be gated by peer review. Release early, release often, publish your experimental setup and what you are looking for before running the actual experiment, etc.
We have arxiv now; the whole process could open up even more and move slightly closer to research in the open, the way open-source development happens in the open. This would make peer review easier.
Then what is currently "publication" becomes what it should be in the sense of adding value: curation, filtering for the most interesting, least dubious results.
You're not the only one to make this comment, but from what I know peer review is a long iterative process, where reviewers raise objections and submitters respond with edits.
So, what you're suggesting seems a lot like shipping code before code review, and seems to have many of the same problems. What do you do in the long period before critical issues have been addressed? What do you do if the submitter just says "fuck you, my code works, I have more important things to do"?
arxiv is actually less open than most traditional venues.
Anyone can submit to most conferences and journals. Anyone at all! No PhD or reputation needed and it’s blind. All that matters is the quality of the research.
To submit to arxiv you need to be approved by someone as a legitimate researcher, or beg for reviews from people you’ve never met without any anonymity.
It is my impression that, if the quality of the work is good, it is easy to get arXiv endorsements. I believe the intent is not to form a clique or to gatekeep, but rather to keep arXiv from being buried under a giant pile of internet spam and crackpots.
This goes both ways. It’s obviously an unfortunate barrier to entry, but it generally means the content on arXiv is reputable and meets a certain bar (and submission is not in any way anonymous, because members are verified, which can discourage low-quality submissions). Compare arXiv to viXra. It seems unfair to Cornell University Library to have them moderate submissions for scientific quality, and the people who currently use arXiv would not want to filter through the new papers in their topic of interest to determine which were worth reading and which would be a largely wrong waste of time. (Perhaps I’m being a bit unfair here: in many fields there would likely be no time-wasters, and in others it would be easy, if annoying, to filter out the P=NP or Riemann hypothesis papers.)
I don’t think that letting anyone submit is a good solution.
> I don’t think that letting anyone submit is a good solution.
I think their process is totally reasonable and I’d probably do the same... but I do think it’s a fact that it’s now less accessible and more based on reputation and credentials, and we should acknowledge that.
I wonder which is more likely though. An unpublished and non-PhD author being published in a journal after submission, or that same author being allowed to publish on arXiv.
My guess would be on arXiv.
If you really have a decent breakthrough, I'm pretty sure your best bet is to directly contact some relevant people in the field.
Which is more likely: you get a paper, as yet unreviewed for formatting and style by any expert in the field, accepted into a conference or journal, or you get someone with field expertise to review it for format and style, and they let you post to arXiv?
I see. I just meant that we have a place where publication is technically easy. No year-long waits. Everything is immediately available online. No physical paper involved.
I suppose that changing the technological base to fully electronic, immediate-mode publishing should help.
Building a review workflow, or several, around the fully electronic and open archive of all published results should be doable.
Storing everything online by default, including all the negative results, failures to reject the null hypothesis, full datasets and code, etc., should not be overly expensive, but it would help reproducibility and further research a lot.
> To submit to arxiv you need to be approved by someone as a legitimate researcher, or beg for reviews from people you’ve never met without any anonymity.
Well, that’s good for you, isn’t it! What about everyone else? Not everyone is automatically approved because of things like their email address. That’s my point: they focus on reputation and credentials rather than the blind value of the work. It’s less accessible.
The confusion comes from the 'journal', aka topic, on arXiv, which can have different settings for when an author can submit. Some are more stringent than others. It is not an arXiv-wide issue.
Nonsense. Only the editorial board can approve you to publish in a journal or conference. Approximately anyone on arXiv can approve you to publish on arXiv.
The article doesn't propose we do away with peer review, but rather that lay audiences stop pretending that peer review is a "gold standard" by which the quality of research can be judged. It's "gold standards" the author proposes to eliminate.
I get that. Sorry my reply wasn’t very clear. The article gives examples of how politicians and policymakers have “weaponized” peer review to silence critics. The author proposes moving away from peer review (and the gold standard it sometimes represents) as a way to solve this issue.
In my reply, I was just pointing out that peer review still plays an important role in the development of knowledge, and I was wondering whether alternative systems that could replace it would actually be feasible. That led me somewhere off topic.
By the way, the “weaponization” of peer review is nothing new. You can see it used in the past to discredit research linking smoking or lead to health issues. You can also see the possible risks or benefits (depending on your point of view) of rejecting peer-reviewed research in the anti-vax movement.
Some journals do this. It is an interesting approach.
I am, however, ambivalent about, or perhaps even against, allowing public comments directly on a paper. We've all experienced, no doubt, a naive but loud internet commenter spreading FUD in comment sections and on social media about topics they simply know nothing about, and doing so prolifically. It is nice to think that good arguments will float to the top, but often it is the loudest and least nuanced argument, right or wrong, which gets taken up.
I'm an editor of a journal that has one of (the?) highest IFs in its field, among other things. Lots of experience with scientific publishing.
I've had lengthy discussions about this with my colleagues, and I don't think the problem is with peer review per se, it's with how peer review, and by extension, scientific literature, is interpreted and utilized. It's not just with the lay community, it's among scientists themselves, as well as scientific consumers as in engineering, healthcare, etc.
People now treat science, and scientific publications, as if it's a farm or factory. The produced commodity is the peer-reviewed paper. Once research, via a paper, is peer-reviewed, it's blessed as the truth, and if it's not, it's worthless. The goal of many institutions is now not to produce sound research per se, but to receive money (largely due to indirect cost income) to produce the commodity of peer-reviewed paper. Once a paper is accepted as in press, it can be cited as if it were truth, which provides a building block for something else. The truth per se doesn't matter, only the citability of it as peer-reviewed.
I realize this sounds a bit conspiratorial but I do think this is basically how it works at this point in time, at least in certain fields. Peer-review is overvalued, as are single papers, and contributions of particular researchers in many cases. That doesn't mean they don't have any value, just that their real value is probably much less than we assume.
I agree that peer review can be improved a lot, and will likely always have an important role in some form, but I don't think fundamental problems in the field will go away unless there's a change in perspective on the real scientific process as a whole, to something more nuanced and gradual.
Personally, I view any study in the less scientific fields (i.e. those that routinely accept p < 0.05 as "proven") as suspect, unless it's preregistered, has full data available for the public to check, and has p < 0.01. (I'm neither aware of any such study, nor have I looked for it... I'm just saying that that should be the minimal acceptable standard for e.g. psychology and medicine.)
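A quick toy simulation shows why the preregistration part matters as much as the threshold (the 20 analyses per study and the sample sizes are arbitrary illustrative numbers, not taken from any real field):

```python
import numpy as np
from scipy import stats

# Toy model of the "garden of forking paths": each simulated study runs 20
# different analyses of pure noise and reports its best p-value. Without
# preregistration pinning down one analysis in advance, noise clears
# p < 0.05 most of the time; p < 0.01 helps, but doesn't fix the problem.
rng = np.random.default_rng(0)
studies, analyses, n = 5_000, 20, 30
hit_05 = hit_01 = 0
for _ in range(studies):
    best_p = min(
        stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
        for _ in range(analyses)
    )
    hit_05 += best_p < 0.05
    hit_01 += best_p < 0.01
print(f"noise-only studies with a hit at p<0.05: {hit_05 / studies:.0%}")  # ~64%
print(f"noise-only studies with a hit at p<0.01: {hit_01 / studies:.0%}")  # ~18%
```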
I would love to see an open review system like that. Comments out in the open, along with the experiments and experimental data which supports or does not support the work.
Science is all about experimentation, yet we currently measure papers by number of citations rather than by experimental support.
The gold standard of science is a different experiment whose findings agree with the findings of the first.
Replication is much less valuable, scientifically and professionally, than non-scientists think it is. Simply repeating a published investigation runs a much higher risk of repeating any experimental errors or faulty assumptions that might have harmed the first one.
The whole point of science is to find knowledge that persists beyond one particular perspective; knowledge that is independently verifiable. Rote replication is not the best way to find this type of knowledge.
As for peer review, its purpose is simply to sharpen the communication of a completed study. Even if every study was replicated, the papers would still benefit from peer review.
It would be interesting to see if replication is really the glue that holds our greatest scientific achievements together. I suspect not. In my view a higher standard is progress towards powerful unifying theories, even if getting there is a messy business.
I'm thinking of something like relativity or quantum mechanics. Suppose a study in those fields fails replication. The whole thing still holds together, to the point where controversies at the part per billion level make it to the front page of the newspaper. Perhaps even most studies, taken in isolation, would be found to have problems when subjected to the strictest criteria for replication. Choosing replication as a silver bullet would be an unnecessary distraction.
Now, what about fields where there is no unifying theory on the horizon? If replication is all we've got, then sure. I can certainly see the point, especially if the results affect personal decisions (diet, medications, etc.) or public policy.
I suspect that "gold standards" can hurt science as much as help. Telling people that science is bunk because of the "replication crisis" contradicts the fact that messy science has produced results of astounding accuracy and predictive power. Learning from success should be at least as important as installing safeguards against failure.
> Replication is slow and can be extremely expensive
Gold standards are identified as such because they’re the best. Not everything needs to meet the gold standard to be debatable. Peer review is adequate for further research, but perhaps not policy initiatives. An unreviewed paper is enough to start simple inquiries. Et cetera
Replication is a more powerful statement about the validity of research findings than peer review. Lots of valid research findings will not be replicated before they influence other researchers and the public. The point of this piece is that lay audiences shouldn't expect a simple "gold standard" by which they can distinguish the good research from the bad; understanding research requires critical thinking and access to domain expertise.
And there are also ways in which replication is orthogonal to peer review. Replication can't by itself tell you whether a piece of research makes a significant contribution to the field, or whether it is itself derivative, or poorly presented.
It does not have to be slow in all domains. In my field there are many modeling papers. Journals could require that all data and code be open access, or at least define a submission type where this is the case. You could even automate the process of running submitted containers / packages.
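As a rough sketch of what that automation might look like (the image name and file layout here are hypothetical placeholders, not any journal's actual pipeline):

```python
import filecmp
import subprocess
from pathlib import Path

# Hypothetical submission check: re-run the authors' container and compare
# its output to the results file they reported in the paper.
IMAGE = "submissions/paper-1234:v1"  # placeholder image name

def rerun_submission(image: str, out_dir: Path) -> None:
    """Run the submitted container, bind-mounting a host directory where
    the container is expected to write its results."""
    out_dir.mkdir(exist_ok=True)
    subprocess.run(
        ["docker", "run", "--rm", "-v", f"{out_dir}:/outputs", image],
        check=True,  # fail loudly if the container errors out
    )

if __name__ == "__main__":
    rerun_dir = Path("rerun").resolve()  # docker needs an absolute host path
    rerun_submission(IMAGE, rerun_dir)
    same = filecmp.cmp("submission/results.csv", rerun_dir / "results.csv",
                       shallow=False)
    print("reported results reproduce" if same else "reported results differ")
```

Even that much would catch papers whose published numbers can't be regenerated from their own code.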
All that would prove is that the code implementing a buggy version of a model gives consistently wrong results. Actual replication requires that somebody else implement the code independently, and that both implementations be checked over a reasonably wide range of parameters for which the model should be valid.
Actual replication is much more effort and consequently slower than just a "docker run".
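To make that concrete, here's a toy version of the stronger check (the logistic model and its tanh rewrite are placeholders, not from any real paper): two independently written implementations compared across a parameter sweep rather than on a single test case.

```python
import numpy as np

def model_a(x: float, k: float) -> float:
    # the "original authors'" implementation: a logistic curve
    return 1.0 / (1.0 + np.exp(-k * x))

def model_b(x: float, k: float) -> float:
    # an independent re-implementation via the equivalent tanh identity
    return 0.5 * (1.0 + np.tanh(k * x / 2.0))

# Check agreement over a grid of parameters and inputs, not one data point.
for k in np.linspace(0.1, 10.0, 25):
    for x in np.linspace(-5.0, 5.0, 101):
        assert np.isclose(model_a(x, k), model_b(x, k), atol=1e-9), (k, x)
print("implementations agree over the tested parameter range")
```

Writing model_b without peeking at model_a is exactly the expensive part that a "docker run" can't buy you.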
While I agree that this should be mandatory in many cases, it doesn't prove too much. I've experienced many cases where the open code worked fine with the test data provided, but failed completely when I tried it with my own real world data.