What you lose on arxiv is peer review. That might sound like a feature, you get to let others know of your results faster, but if you make a mistake it will not be caught until it's out in the wild (and possibly until everyone has already cited it and based their own work on it). With peer review you know that someone who has the background to understand your work and to catch errors will read it.
So what, you'll say: independent experts can catch your mistake if they read the paper on arxiv and contact you to suggest a correction. Yes, but there's no incentive to do the hard work of correcting mistakes if your paper is already on arxiv and people are already citing it (without having done their due diligence). Then errors start cascading. Science is self-correcting, sure, but it's much easier to avoid errors in the first place. That's an important function of modern peer review.
I'm speaking from experience, of course. Peer review has certainly helped improve my work. I didn't enjoy having to redo the work, but the end result was something I could be confident about. Also, I'm only speaking about publishing in journals, because peer review in conferences is a very different beast. In my field of machine learning and artificial intelligence research, I'd go as far as to say that peer review in conferences is broken: getting published or not is a lottery, and you're better off putting your stuff on arxiv: less hassle and more people will read it. You might even see your article posted on HN. When was the last time a paper published on a conference server was posted on HN? See?
The way we used arxiv worked well in physics, though this is 15 years ago now so might have changed since.
arxiv was about distribution. It didn't replace peer review - articles were still submitted to journals and published there too.
If an article was posted to arxiv and not a journal, the odds of a citation went down massively. And the journal it was submitted to was a factor in whether or not we read it. When articles were eventually published, most authors also updated the preprint with the post peer review version.
Basically it meant that
(1) it was easy to keep up to date with what everyone was working on, and pick up interesting new stuff
(2) for most post-1980s citations you saw in whatever paper you were reading, you could look them up on arxiv and be reading them in seconds.
This is not the case in computer science and particularly machine learning, especially in recent years. You'll find many papers where a majority of references are to preprints that stay preprints forever. You'll also find many papers that have hundreds of citations, all while remaining preprints forever (and many of those citations are from forever-preprints themselves).
In machine learning, for the most part, arxiv is used to avoid peer review. Or as a way to "publish" work that has been rejected by a peer-reviewed publication, of course.
And to be more cynical, it's also a convenient source of references to pad out a Related Work section and make it look like incremental work is part of a growing body of groundbreaking new work. /jaded
Edit: well, I'm not just being cynical. The fact that everyone can put their half-baked papers on arxiv means that the 90% of work that is crap, per Sturgeon's Law, is now a much bigger quantity than ever before and one must sift through reams and reams of crap before finding work that has any meaningful results to report. Again, that's the case in machine learning specifically. I don't know about other fields.
But those Arxiv papers which were not published elsewhere but which got many citations, those were read by others, i.e. reviewed by peers, i.e. peer-reviewed.
Arxiv only lacks the initial quality filter by peer review.
I'm also working in the field of machine learning. In the niche fields I work in more specifically (speech recognition), I can usually still get a lot out of Arxiv-only papers. I can pretty easily see the main idea and judge whether there is some usefulness in the paper or not w.r.t. my own research, e.g. from good experimental analysis. I don't really feel overwhelmed by the amount of papers. I don't really see the problem.
Can everyone put their half-baked papers on arxiv? I see people on IRC discussing finding sponsors for their papers and whatnot, and how this part is not trivial at all. I don't know the details of how the sponsoring works for arxiv, but it seems like it doesn't let everyone post their half-baked papers, at the very least.
> Why do we call it "preprints"? The term seems to imply that work is preliminary or unfinished. As far as I can tell, the term introduced by
@arxiv
, the first online repository for scientific manuscripts, is "e-print". Is "preprint" a marketing device invented by publishers?
I think it's because the publisher retains copyright. There is a limit on how "done" the manuscript can be and still be shared for free online. Some universities have started to fight back against this by limiting the scope of copyright restrictions that publishers can impose.
No, in most of physics there is no such limit in practice. The only difference between my published work and the preprint arxiv versions is the font and whether the layout is two columns or one column. They are word-for-word identical with identical figures.
In ML at least, the arXiv version is the “canonical” one. This is because the conferences have onerous page limits, so the official conference version is usually mangled with lots of cut content. The arXiv will have content restored and be more readable.
In theory you should read arXiv and cite the conference version, but often people cite arXiv and nobody cares because google scholar mostly combines things properly.
Or, instead, you get a much larger audience reviewing your article, instead of the 2 to 4 reviewers involved in the paper publishing process :-)
You can get comments from critical and interested readers, you upload a new version of the paper to arXiv, and repeat the process until the article is ready for submission/publishing.
In other words: I think that an article on arXiv (on average and as a whole, individual articles may be exceptional) is subject to much wider and more extensive scrutiny and verification than the average article during the paper publication process.
Well, if all one wants is an "audience" one can simply report the results of their research on twitter. Much bigger audience. But that's not the point, is it? The point is that you want someone who knows their shit to review your paper, someone who will have both the knowledge and the motivation to engage with your work critically and help you see what is behind all those big blind spots we all have for our own work. I'm sure this happens with papers put on arxiv, occasionally, but my expectation is that just because the paper is on arxiv most people will not be that interested in making a serious effort to review it, and that it is much more likely that such a serious effort will be made by a reviewer in a peer-reviewed venue, particularly one of the top journals (conference reviews are a mess).
So, yeah, I disagree. The level of scrutiny one gets from putting their work on arxiv doesn't compare with peer review by experts in one's field.
People cite preprints in their references. So as an expert you can find the preprints that others based their work on. And you can use a tool like Google Scholar to find preprints that cite a particular work and build on it. It fits neatly into the same old systems experts have always used to find your work. And it's much faster.
>> You get incredibly less review at the publication stage — and your idea “experts in one’s field” don’t also use the internet is hilariously wrong.
I'm glad to hear you're amused by my comment. Where did I express the idea that '“experts in one’s field” don’t also use the internet'? Can you please show me?
Honestly the esteem of peer review does not hold up to the reality of peer review. It's elevated to this gold standard for quality science, but in practice it's just not. Not only are some of the real landmarks in science older than peer review as we know it today (until the middle of the 20th century, articles were reviewed by an editor); in several fields, 50-90% of peer-reviewed studies fail to replicate.
It's very possible to just keep submitting papers to journals until you get someone who shrugs and lets it through. Reviewers aren't paid, and while reviewing is encouraged it basically does bupkis to further your career (and academia is very cut-throat).
> in several fields, 50-90% of peer-reviewed studies fail to replicate.
Is that a failure of peer-review though? Or is it more of a consequence that even if you do everything right you'll still get some false positives some of the time, and negative results tend not to get published at all?
It raises questions around using peer-review as a mark of scientific quality. If it is, why does peer reviewed science time and time again turn out to have enormous glaring quality problems?
I think the core problem here is the layperson perception of peer review. People often say "it is a peer-reviewed paper!" when trying to claim that a paper is definitive and powerful. But when you speak to scientists they have a much more muted understanding of peer review. And I think that is okay. The amount of effort it would take to do a very thorough analysis of a paper is large, sometimes even approaching the cost to do the research in the first place.
In fact, peer review is often more about novelty and importance rather than rigor. Very well structured research will get rejected if it isn't seen as contributing to the field in a nontrivial way.
A few fields (psych is the big one) are funding replication grants. That'd be the true mark of quality.
> In fact, peer review is often more about novelty and importance rather than rigor. Very well structured research will get rejected if it isn't seen as contributing to the field in a nontrivial way.
This is actually deeply problematic, as a part of what's driving the replication problems is specifically publication bias. If the only thing that gets published are unexpected or noteworthy results, you're selecting for statistical aberrations.
I don't really agree. I think it is valuable to push novel and influential results into visible venues and to reward work of this kind. It is critical to understand how this introduces bias into the process and to consider this when reading a given paper, but I'm not sure that a review system that only considers experimental rigor would be desirable.
I think this is like the difference between homeopathy and medicine. If you're sick and go to a doctor, you have a chance that the doctor will find a way to cure you. If you're sick and go to a homeopath, there's no chance that the homeopath will find a way to cure you. Similarly, peer review has a chance to catch errors before they make it to print, while absence of peer review has no such chance.
Peer review is not infallible, neither is it a mark of scientific quality, as you say. I appreciate that people tend to use it like that- you read, in articles in the lay press, that such-and-such study was "not published in a peer-reviewed journal" as if to say that we can't be sure of the quality of the study and that, conversely, if it was peer-reviewed, we could. That's not right. Peer review is not a sufficient condition to guarantee correct results. It's not even a necessary condition. Like I say in my comment above it is one mechanism of modern research practices that helps improve the quality of published papers.
> If you're sick and go to a homeopath, there's no chance that the homeopath will find a way to cure you
Wrong, sorry. Disclaimer: I'm not a homeopath and don't go to any. Most of them are, in fact, quacks.
There are, unfortunately, chronic diseases which can be treated but not cured. Mainstream doctors and Big Pharma make a living off of those. If you have one of those, "well, why NOT try a homeopath?" is a perfectly rational response.
"Sinus rinsing" is something that might be termed "homeopathy." It IS medically respectable, unlike most of their "treatments" (like magnets).
Completely agree - the quality of peer-review is a lottery. Sometimes you get an excellent referee making good points, other times you get a referee solely focussed on making sure you cite their work. And, occasionally, you get a referee who doesn't even appear to read the paper and just waves it through.
Borrowing from software engineering, you can publish code to a git repository and someone else can make a pull/merge request with a proposal to make fixes/improvements. There's still work involved in reviewing, revising, and merging those proposals, but it can be less work than needing to make every change yourself.
I wonder if academic papers could benefit from a similar process of communal development (though it flies in the face of the dead-tree format of academic papers, but we're talking about Arxiv here, not Elsevier). Even if suggestions don't get merged, having a repository that captures discussion around the central artifact is also useful (such as when a maintainer decides that a feature doesn't belong in a project).
> With peer-review you know that someone will read your work that has the background to understand it and to catch errors.
This is a remarkably naive (and charmingly optimistic) view of peer-review.
For the vast majority of the history of academic research, peer review did not play a major part. It wasn't until the 1970s that the modern-day peer review system emerged, and it did so as a reaction to a funding crisis in academia, to produce the illusion of legitimacy.
It of course has done no such thing. It has done nothing to limit the reproducibility crisis, which has reared its ugly head in nearly all scientific fields of study. One could even argue that peer review coupled with a publish-or-perish culture is the cause of such a failure of science.
This also means just about any great scientific discovery you can think of prior to 1970 did not go through the peer review process as we know it today. If science seems less exciting today, peer review is certainly one of the reasons for this. I'll leave you with Geoffrey Hinton's words on the subject:
> Now if you send in a paper that has a radically new idea, there's no chance in hell it will get accepted, because it's going to get some junior reviewer who doesn't understand it. Or it’s going to get a senior reviewer who's trying to review too many papers and doesn't understand it first time round and assumes it must be nonsense. Anything that makes the brain hurt is not going to get accepted. And I think that's really bad.
Lacking peer review is a feature, and not just because it speeds up time to results.
>> This is a remarkably naive (and charmingly optimistic) view of peer-review.
Great way to join a conversation. Well done.
I didn't say anything about "the vast majority of history of academic research". I talked about my experience of peer review of my work. Are you responding to someone else's comment but quoting mine?
Agreed. Peer-review is essential. I do like the idea of a public peer review however, similar to that in the https://joss.theoj.org/ - I had a really pleasant experience reviewing a paper for JOSS.
I think sometimes reviewers hide behind their anonymity to give snarky responses, and using their power over you, knowing you will try your very best to implement all of their suggestions in order to publish faster. I just think the whole process should be more transparent.
Peer review is important but lets be sure to not conflate it with the established way it operates. There is nothing that says peer review can only happen through submitting to an established Journal that conducts a traditional peer review process. We already know that process isn't perfect so there is no harm in looking to improve or replace that. Does that mean ArXiv as it is works as a replacement? No, and you make some good points why it doesn't.
> but there's no incentive to do the hard work to correct mistakes if your paper is already on arxiv and people are already citing it
Peer reviewing is essential, of course. But note that in most journals reviewers are not paid. They have even less incentive to find mistakes in your work than you do. I have no idea how such an utterly broken system emerged, where journals siphon billions in tax money by exploiting the free work of scientists.
The incentive to freely review others' work is that others will also freely review your work.
This is quite separate from the exploitative aspects of the big scientific publishers. Reviewers are certainly exploited by publishers, because publishers make money from reviewers' free work. But reviewers would do the work for free anyway, because that's what they expect others to do.
> I'm only speaking about publishing in journals, because peer review in conferences is a very different beast. In my field, of machine learning and artificial intelligence research, I'd go as far as to say that peer review in conferences is broken, getting published or not is a lottery and you're better off putting your stuff on arxiv: less hassle and more people will read it.
The situation for conferences in CS is basically how it works in journals in other fields. There may be more low-quality submissions to journals, but at the same time, the extreme competitiveness means most high-quality submissions are rejected also. The "solution" has been the appearance of open-access journals which are less concerned with subjective reasons for rejection like "impact", and scale to accept more papers. But they cost thousands of dollars. Open archives are a more democratic solution.
There's actually some incentive: you get to feel smart by telling someone else they're wrong. But it is indeed not a very strong one, and there are plenty of other places to act smart on the internet where you'll actually have an audience too!
> But moderators frequently intervene, delaying posting by days or weeks, reclassifying papers or even outright rejecting submissions.
Definitely not my experience. I haven't posted to arXiv in a while now, but I wasn't even aware a moderation queue exists -- none of my papers were delayed, reclassified, or other...
EDIT: I see the article provides a few examples, but then they also provide numbers: "At arXiv, roughly 6 percent of submissions receive a hold, and about 2 percent are rejected." -- 8% of heavy-handed moderation intervention wouldn't qualify as "frequently" IMHO.
The arxiv is a treasure, and such preprint servers not only streamline the academic debate but are also a necessary first step towards a future without for-profit academic publishers.
The arxiv moderators are like human spam filters. And as anyone who dealt with email should know, spam filters are essential for a platform to remain useful. I think the article puts undue emphasis on the few edge cases that invariably arise with any such filtering system.
And being a moderator is a thankless job indeed. The efforts of a moderator are largely focused on the absolute worst papers, and the job is therefore very different from being a referee or editor of a reasonable academic journal. So personally I would also focus elsewhere any efforts at increasing diversity in academic institutions.
I legit don’t get how come there isn’t some government funded publication platform. No other endeavor has as much bang for buck. Arxiv needs $2.5m a year https://arxiv.org/about/reports-financials which is chump change.
Arxiv is an indirect threat to publications. Publications lobby the government and also influence university grants and other bodies.
We still don't have proper free tax filing from the IRS, or taxes computed by the IRS for us merely to verify/confirm as in some other countries, primarily because Intuit (and others) lobbied to make sure the government would never develop one (encoded in law, no less). So it's not that surprising that a community-benefiting project that threatens some big business revenues has no government funding available.
It's incredibly annoying that publications try to continue to exist in order to act as gatekeepers to exclusive papers. If journals added real value by peer-reviewing, collating, curating, offering editorial opinion etc., then the fact that papers are also available in a raw state somewhere else wouldn't be much of a threat.
You don't need publications, of course. Both sides are competing to own a social network and derive their strength from users; that is the threat.
An online ecosystem like Arxiv can very easily supersede a journal if they want to.
For example, Arxiv could deploy a public peer review process, i.e. reviews, criticisms and comments on a paper could be shown next to the paper on Arxiv without a lot of technical effort. Similarly, curation and collation could easily be taken up for free by the community. Any of this would easily upend journals in quality or throughput. There are no problems journals solve that an online platform cannot solve better, not even the exclusivity they charge so much for.
After all there are enough people to moderate reddit or do various quality control tasks in StackOverflow for not much benefit, researchers who curate on Arxiv could easily be found and they may even get a job boost.
Building that community and acceptance is the hard part; everything else is relatively trivial.
--
Open Access, especially for government-funded research, should be mandatory. Same for software: anything the government invests to develop should be open source.
Funding for government-funded archival projects tends to begin with a bang but gradually fall through the cracks due to the project becoming politically unexciting, frequently with all data just completely lost one day with no warning, so I’d be disinclined to trust any such initiative. The French government made an attempt with HAL[1], but given how INRIA just unceremoniously deleted the software repository Gforge with no backups or redirects (and broke every GCC build out there, because the GCC dependency ISL was hosted on it[2]), I’m not sure it’ll survive once the nationalistic fervor is exhausted. Besides, a country-specific repository seems to be strictly worse than a country-neutral one.
On the other hand, PubMed exists.
My guess is, for a preprint archive, network effects are paramount, and ArXiv was there first. It's not directly a government project, but it is still funded by US government grants and hosted by Cornell (now that the international mirror network is shut down, which does give me a slightly jittery feeling).
If you mean some government should fund arXiv specifically, then I don't disagree, but I would guess the government's default position is "it exists without public money, and we think that's rather excellent".
If you're suggesting it as alternative for academic publishing in general, it would cost a lot more.
Arxiv isn't a "publication platform" in the sense that peer-reviewed journals are. The cost of hosting and distributing PDFs is close to zero, obviously. What does cost money is the copyediting and running the peer review process.
People think Elsevier are the worst people in the world (not entirely wrong), taking in millions while exploiting the free labor of authors and reviewers. But it's relatively easy to quantify the value they do add: their financials are public and, last I checked, they have operating margins of about 30%.
Let's say we could do without their marketing and billing departments, and that gets us to a margin of 50%. That's pretty good, but it isn't quite the rip-off people sometimes suggest it is. Replacing the commercial publishers with government-funded open-access journals would be a billion-dollar endeavor, not a few million for servers and a bit of software.
Perhaps it's best for governments not to get involved in arxiv. They are heavily involved in most other aspects of academia, and the results haven't been universally pretty.
Because you cannot rely on the rich to fund common goods through voluntary donations. It is not a good idea and never will be. Other than some medical research, charity has never solved a single social ill.
The introductory econ course I once took mentioned the (apparently classic) lighthouse study[1], though now that I did a web search it doesn’t seem that unequivocal. Many 19th century projects were funded through public participation (the Eiffel Tower and the Statue of Liberty come to mind), even if the social value of some of them was questionable (though I think some railroads were built this way too?). Before that, I seem to remember that the Netherlands was made habitable (from essentially a coastal swamp) largely through private enterprise. In more recent times, the late engineer turned entrepreneur Dmitrij Zimin funded a significant portion of the Russian hard sciences in the 2000s and 2010s (after Soros grants got some of the people through the Soviet collapse in the 1990s and then ceased), though the problem is still far from solved.
It is peculiar that the model of fundraising via newspaper ads and a couple of wealthy donors (including in some cases the national government, yes) doesn’t seem to have survived past the world wars, but to say that private funding never resulted in any social good is preposterous. Something has happened in the last century or so, but it’s trickier than that.
Calling a hoogheemraadschap a private enterprise is like calling a government one. If it acts like a government, it is one, even if it is not benefiting everyone equally (still no difference, though).
Isn't it essentially a corporation in its original definition, where corporations were established for a given purpose (not just "maximize shareholder value" but rather something like "build and maintain some infrastructure" or whatnot)?
Discard the last sentence. You're talking in absolute terms and that's leading nowhere. It's been some time, but this is what came to my mind immediately: https://en.wikipedia.org/wiki/Fuggerei
That you can not rely on the rich to put their money in good use with the general public in mind is right. Does it happen? Yes. Does it happen often? Definitely not.
I don't think normal people define "charity" only to mean "charity that doesn't work" though. Giving to a local food bank is charity to most people. And, like, GiveWell/GiveDirectly are NGOs that are actually effective.
Right! And that's really my point: there are so many people in tech with money and an ego that I'm surprised this isn't done more. Pineapple person is the most recent memorable example.
Yes, I think we're on the same page here. Your main point was that you cannot rely on the rich to use their money to benefit others, and I totally agree.
> Other than some medical research, charity has never solved a single social ill.
This is a very broad and incorrect statement that I suggest you research more before stating as fact. Maybe start with libraries in the 19th and 20th centuries if you don't know where to start.
Much of the money for startups comes ultimately from pension funds. They have a fiduciary duty to make sure the money is invested in something that is expected to make a return.
Part of the challenge is the reputation issue. Academics want to publish where there is prestige, which means there has to be a perception of the venue as good. This typically comes either from reasonable gatekeeping and high-quality peers (e.g. arXiv) or from pushing metrics (e.g. Nature Publishing Group journals). Generating an attractive journal is non-trivial, and few people with the appropriate knowledge and expertise would take the plunge without some guarantee of profit.
> Academics want to publish where there is prestige
Important to note that this is not some innate "want"; their careers depend on it. It's a means to an end, the end being the ability to keep practicing science.
(Of course prestige-by-proxy will play a role as well, but it's not what's keeping the system in place.)
Here in the UK, there is a big research evaluation exercise called the REF. The REF is important because it largely determines how the government allocates funds. In order to be eligible to be evaluated, researchers have to essentially submit their pre-prints into open access.
What this has meant for the last 10+ years is that most universities participating in the REF have an e-prints server, like the arxiv. Obviously, being publicly funded, universities are then using government money to pay for these costs.
Now your question might be whether it's sensible for the entire country (or research councils) to operate their own e-prints server. I don't know the answer to this.
All I know is that the more bureaucracy you involve, the more bullshit results.
I'm in no way involved in academic research, and I don't know how it works in detail, but a question that popped into my head these days was: is there a place where people can publish papers even without any association with anything? Like, if I manage to do proper research, even without any qualifications on paper, would it still be useful? Does this make any sense?
As others have said, you can submit to academic conferences. But each field has different peculiarities and patterns. You would start by doing your research, finding a relevant conference, and writing a paper to match their style.
A more approachable avenue could be blogposts and videos. One good example of a scientific blogpost I recall is here: ( https://dynomight.net/2020/12/15/some-real-data-on-a-DIY-box... ) Someone tested very simple air purifiers with limited experiments. It follows the same "abstract-methods-results-discussion-conclusion" format you might expect. Some obvious caveats, this data could have all been faked, and it isn't peer reviewed for methodological flaws. But you can reproduce the experiments yourself for ~$150.
I am in academia but I like blogposts a lot more. I wish they were more popular. I think HTML makes more sense than PDF, and to some degree they cut the middleman out between communicating to peers and communicating to other academics.
In principle, submission is viable at many venues. PNAS has a few mechanisms for having established authors endorse your submission, for example. The main challenge is going to be the appearance of credibility to get past the starting point, since many non-affiliated submissions tend to be unpublishable without serious rework, at least in lab-based disciplines.
Good research is always useful, where good means something between "reliable and flawlessly explained" and "plausible and very novel".
So on one end you could take something like Newton's Principia, which synthesised a lot of the known facts and laws at the time into one single coherent framework, and on the other you have Galileo recognising that he was seeing surface features on the moon and not just patterns on a flat disk, and announcing this with some illustrations showing the principle and not what he actually saw.
As to the first part of the question: if you talk of "publishing" in academia, you usually mean you want your work to be known by other credible researchers in the hope they find it useful.
The most reliable way to achieve that without credentials is to befriend someone respected in the field you are interested in before you send them your results, because they are more likely to have patience with any errors or confusion in your presentation that way.
There are of course some places that pride themselves on being completely free from filtering, but the concept of a "spam folder" alone should tell you how good a chance you have of getting any serious attention in those places.
You can submit your work to any conference that is double-blind and get accepted if your paper is good. You don't need to be affiliated to a university.
You can use things like arxivsorter and benty-fields for this. You give them a list of papers you've published, or authors you are interested in and then get recommendations. The recommendations can be up and down voted to improve future recommendations.
I just have a few bookmarked searches for what I am interested in.
I don't see how you could generalize across all papers, though. Take randomly clicking on what was submitted to High Energy Physics - Lattice yesterday: I don't have the first clue what any of that is, and I imagine that would be the same for most topics for most people.
Why have you never submitted an article you've found on Arxiv to HN? Are you not finding anything interesting there? (Based on https://news.ycombinator.com/submitted?id=henrydark ... I guess you could use a different account?)
I do a similar thing, but the papers I'm interested in are usually a bit too niche for a general audience. That's one of the things I don't like about academic papers, particularly in my field (astro), we don't tailor our abstracts and introductions to a wider audience.
I wonder if anyone remembers Advogato and its certification/reputation system. It claimed to be very difficult to subvert. I'm sure that was naive and it held up because nobody back then was that motivated to game it, but even today, a little human oversight could probably steady it. So I wonder if something like it could be used for arxiv, relieving some of the endorser/moderation stuff that they have now.
Yes, I bet you could do something interesting there, but one of the lessons is that social graphs can become very noisy, especially if people have an incentive to add edges. Don't expect a purely technical (reputation) system to solve what is a social/cultural problem.
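To make the idea concrete, here is a toy sketch of seed-anchored trust propagation over an endorsement graph. It is a simplified PageRank-style stand-in, not Advogato's actual flow-based metric, and all account names in the example graph are made up. The point it illustrates is the property that made Advogato-style systems hard to game: trust only flows out from moderator-vetted seed accounts, so an isolated clique of sock puppets endorsing each other accumulates nothing.

```python
def trust_scores(endorsements, seeds, damping=0.85, iterations=50):
    """Propagate trust from seed accounts through an endorsement graph.

    endorsements: dict mapping each account to the accounts it endorses.
    seeds: set of moderator-vetted accounts that inject trust each round.
    """
    accounts = set(endorsements)
    for targets in endorsements.values():
        accounts.update(targets)
    seed_share = 1.0 / len(seeds)
    scores = {a: (seed_share if a in seeds else 0.0) for a in accounts}
    for _ in range(iterations):
        # Seeds get a fixed trust injection; everyone else starts at zero
        # and only receives trust passed along endorsement edges.
        new = {a: ((1 - damping) * seed_share if a in seeds else 0.0)
               for a in accounts}
        for src, targets in endorsements.items():
            if targets:
                share = damping * scores[src] / len(targets)
                for t in targets:
                    new[t] += share
        scores = new
    return scores

# Hypothetical example graph: "spammer" and its socks endorse each
# other but have no path from the seed, so they stay at zero trust.
graph = {
    "seed_a": ["alice", "bob"],
    "alice": ["bob", "carol"],
    "bob": ["carol"],
    "spammer": ["sock1", "sock2"],
    "sock1": ["spammer"],
    "sock2": ["spammer"],
}
scores = trust_scores(graph, seeds={"seed_a"})
```

With this toy metric, `scores["carol"]` is positive because trust reaches her through alice and bob, while the sock-puppet clique scores zero; the "little human oversight" mentioned above corresponds to curating the seed set.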
TLDR: 6% of submissions receive a hold, 2% are rejected. The problem, the article says, is that only 13% of the moderators are women and there is not enough diversity of nationality.
My opinion here, but it sounds like instead of a 92% immediate acceptance rate, the author would prefer an immediate acceptance rate closer to 100%. I’m not sure why diversity would get them there, but I am encouraged that the current moderators are able to produce 92% immediate acceptance rate. On the other hand, I am distraught by the suggestion that a highly diverse moderator base would presumably only account for a few percent reduction in holds and rejections. It’s my understanding that that should have a far more significant impact.