Elsevier investigates hundreds of peer reviewers for manipulating citations (nature.com)
89 points by DanBC on Sept 11, 2019 | 49 comments



This seems kind of bogus to me. Yeah, I can see how someone could construe it as such, but I think there's a perfectly legitimate reason to mention possible citations in the review. If I get a paper that covers something I've written about in the past, I feel perfectly justified saying, "Gosh, this has been published before. Perhaps you should cite this and explain how your work is new."

I do it for other papers, and I don't see why it would be wrong to do it for my own. Oh, I can understand how someone could twist that to sound nefarious, but assuming bad faith doesn't seem like the best response to me.

More citations to prior work is a good thing. More discussion about what is truly novel is a good thing.

And let's be honest, my salary won't change one iota if I get another few citations. The idea that this is somehow selfish is really bogus. Citations aren't worth that much at all.


This is true. But did you actually read what the study said? They point out that there is a small number of referees who consistently have their own work cited at a much higher rate than the majority of the referees.

If referees and authors are semi-randomly matched, there's no reason a small number of referees should consistently get to review papers that have incorrectly omitted references to their own papers.


>And let's be honest, my salary won't change one iota if I get another few citations. The idea that this is somehow selfish is really bogus. Citations aren't worth that much at all.

I'm not sure if this is true. Citations are the primary currency in academia. It's not about money; it's about prestige.

>More citations to prior work is a good thing. More discussion about what is truly novel is a good thing.

Again I'm not so sure about this. The problem is that most papers are not novel at all. There are some seminal papers in each subject that everyone cites. There are also papers that are closely related to what someone is doing. Everything else is usually a distraction.


That's reasonable. However, in many cases reviewers push some very tangentially related paper of theirs that is far from what you describe.


> And let's be honest, my salary won't change one iota if I get another few citations. The idea that this is somehow selfish is really bogus. Citations aren't worth that much at all.

IF you already have a permanent position and many papers. While you are still a young researcher with fewer publications, hiring committees are definitely interested in whether your papers are highly cited (see also the whole notion of the "impact factor" of a journal).


Yes, this is correct.

Additionally, massive cash incentives are being handed out in (at least) China [1]. This reward is given based on the impact factor of the journal, which means that excessive citation of your papers can artificially inflate the IF of the journal and lead to higher payout for your next article(s). It's a bit of a stretch, but it could have an impact if it is done on a massive scale.

[1] https://www.technologyreview.com/s/608266/the-truth-about-ch...


I don't like Elsevier, but I always wondered:

Why aren't there a few meta-reviewers, to review the reviews?

That is, why doesn't the PC chair, or some key reviewers, have a look at the reviews given on papers to make sure that:

- They were respectful

- They were to the point

- They did not indicate obvious lack of knowledge of the relevant subject matter (this one might be a bit tricky)

- They were not self-promoting

That would really have helped a lot of conferences...

(but I actually know the answer: there's a dearth of reviewers as-is, so there's little motivation to introduce a meta-review mechanism even if it would help review quality.)


> Why aren't there a few meta-reviewers, to review the reviews?

In principle, the associate editor managing the submission (and making the ultimate accept/reject decision) should be doing this for every paper. In practice, the workload is such that editors have an incentive to make the review process as automatic as possible on their end.


So in theory, a highly-trusted reviewer should avoid regular reviews and be engaged in meta-reviews, because of that load.


If the problem relates to the integrity of reviewers, I suspect you can't fundamentally fix the problem by adding more reviewers.


Sure you can: People examining each other's work deter corruption and also have a good chance of noticing it. Only if a reviewer and a meta-reviewer collude to mis-review do you have a problem, and that should be rather rare.


Sure, in theory. Another theoretical but entirely possible situation is that you get a black market of colluding citations. Consider this analogy: a company has workers and it wants to ensure they are working efficiently instead of chasing promotions. So they add managers. And managers for the managers. And managers for the managers of managers. But in the end, everything is drowning in inefficiencies because everyone is still chasing promotions. Without addressing the root problem, it won't help to add more of the type of individuals who are prone to succumb to perverse incentives.


Unrelated to anything else:

Please try avoiding work with Elsevier journals, and instead prefer open-access not-for-profit ones.

Did you know Elsevier had a 37% operating profit margin last year?


Elsevier has an open-access no-cost publishing option [1].

I understand it's considered inferior to the paywalled version by reviewers, and that Elsevier itself regards it rather like a loss-leader. So indirectly, I suppose Elsevier profits from it. And I don't disagree with the exhortation to avoid Elsevier - it's better for all of us if research (and especially publicly-funded research) can be read by anyone, without having to pay a gatekeeper.

[1] https://www.elsevier.com/en-gb/about/open-science/open-acces...


Here's a possible solution to this problem. I'm still workshopping the idea, so feedback is welcome.

The review process must become more transparent. The reviewers and their assessment of the work must become public when the paper is accepted and published. This in itself will prevent some nefarious citation-begging.

In addition, some guidelines could be mandated concerning citations to be added or removed. For example, for each reference you want the author to include, you must explain why it should be included. Not just "this has been done before" but "the work you describe is similar to the work done on p.4 of [new ref.] but differs in X,Y and Z and is still novel because of invention/method A,B,C. Therefore, you build on their work and should cite them."

To the others in this thread: remember that peer review is often an iterative process, and the authors and reviewers have multiple rounds of review in which the authors can choose not to implement some suggested changes, such as adding citations. This is often a small debate. The editor of the journal has the final say and can overrule the decisions of the reviewers (publish / reject) if there are good grounds to do so.

Also a small note: the reviewers are selected by the journal editors based on expertise. That means, especially for smaller fields, that many prominent papers are written by the reviewers simply because they are the experts. It is easy to conclude that this leads to bad behavior, but it is often the case that there simply are no other papers available with such specific details. Of course, this is not an excuse for malicious actors, who are also rampant.


Not sure why you're being downvoted.

I disagree on one point specifically though. Reviewers must remain anonymous. What if I harshly review / reject crap work from the department head of some group at Harvard? Then, in anger, I get blackballed by them and their friends, which could be the whole field in some cases.


The reviewer is known to the editor, so you already have some accountability there. The reviewer should definitely NOT be known publicly for the same reason that reference letters are usually confidential. You are going to get sugarcoated, bland, self-censored reviews otherwise.

> "the work you describe is similar to the work done on p.4 of [new ref.] but differs in X,Y and Z and is still novel because of invention/method A,B,C. Therefore, you build on their work and should cite them."

That is your job! As a referee I will point out that:

- This is similar to paper X, which you did not cite. You should give credit here.

- You, not me, should argue why your work is new/better etc. and hence worth publishing. If you just write "we do the exact same stuff as in X", then I will assume the work is not novel; I won't assume you are underselling it and that it is actually novel and very different from X.


Points taken. Maybe the reviewers should be anonymous, but I still believe that making the reviews public after publication is a good thing (anonymized).

> The reviewer is known to the editor, so you already have some accountability there.

Yes. I think it is not enough though.

> That is your job!

Well, yes it is. But, assuming the author is acting in good faith, it is still possible to miss some references. You can kindly point out the references and ask for clarification as to why the material isn't cited. I may have gone too far by handing the arguments to the authors.


This seems a positive move for science process, as well as for Elsevier's reputation and the value it can add as a publication venue.


I think undoubtedly the reason for "investigating" this now is the optics for Elsevier. They are bad guys in the eyes of most people right now.

The main evidence I have for this claim is that coercive citations are so common as to be a trope in PhD Comics and the like. Moreover, it is bleeping obvious to the editor, when they receive a review by M. Ister that suggests 10+ new references to M. Ister et al., that this is going on.


It also de facto deanonymizes the reviewer.


That may be the least of their problems right now


It's good that they're investigating the "insider dealing" variant of citation boosting, but I haven't seen anything around excessive self-citation or chain-citing, which I always viewed as a more common offense. Obviously follow-up papers are a good thing, but I've known several authors who aim for long chains of follow-up papers (or papers summarizing several chains) purely to boost citation numbers. The idea being that if it's a hot topic, their paper will be caught in the crossfire regardless of whether it truly merits a citation.


Elsevier has always smelled bad to me.

They published the Andrew Wakefield autism paper for crying out loud.


Well, The Lancet published it - The Lancet is probably the #1 medical scholarly journal bar none. True - The Lancet is owned by Elsevier.

Ten years later, the General Medical Council found that Wakefield's research had been dishonest, and on that basis The Lancet retracted the article. But I don't think it's right to criticise Elsevier over the publication of a paper that wasn't conclusively falsified for another decade.

Reviewers are not supposed to be detectives; it's not their job to expose deliberate fraud. They rely on the material submitted by the author. This is much the same as for auditors: they rely on the documents supplied by the auditee's accounting team; they're not in the business of investigating criminality and blatant lying.

Elsevier smells no worse to me than any other commercial publisher of scholarly journals. To be clear, I think they all have a pretty dodgy whiff about them; but I'm not aware that Elsevier is any worse than the others.


Most papers I've heard of that exhibit malfeasance come from Elsevier and it's my understanding that the journal editors manage the whole peer review process and are compensated by the publisher through some form of an agreement.

And yes, it is the job of reviewers to at least recommend not publishing a paper that can be detected as fraudulent.


I'm actually surprised the number is so low.

With high-profile scientists, in specific fields there are often two camps pushing two different competing solutions to the same problem, and each solution tree has a set of "canonical" references which are cited in most papers in that tree. Reviewers from the opposing camp generally ask for a set of these to be discussed (i.e. why does your paper support your tree, and what have you found that seemingly invalidates the other tree?). The more prolific authors in a subfield are generally chosen as reviewers, and also hold the most recent papers in that tree, so requesting citations to recent relevant literature often goes hand in hand with work they've had input on.

Add to this the fact that, even in blind review, the reviewer can guess the author's identity correctly more than half the time, and anecdotally the reverse is also true (this is not as astounding as it seems: in my specific subfield there are maybe 20 people putting out a paper a year -- there was a paper I reviewed where I did not suggest my own work for citation, yet the author retroactively added a citation to me, leading me to believe they guessed my identity). Given all that, I'm surprised fewer than 1% "consistently" get cited in papers they review.

If anything, this meta-analysis is heartening, as I would have expected the problem to be much more prevalent under the "publish-or-perish" (get cited or lose your job) paradigm. Elsevier, in my field, has high-impact journals though, and I know the problem to be much more rampant in the lower-level journals.

As a slight aside, some countries simultaneously push local researchers to publish in their own local journals and then turn around and note that publications by their scientists have tremendous citation numbers. I personally question the validity of those citation numbers, since they seem to be politically incentivized. Within a field, real researchers will have a feel for which journals are pushing good science and which are "citation padding" with incomplete and hasty work just "published" for the numbers.

In response to the meta-review comments: all review is volunteer work, and it should be this way, but I personally find it obnoxious when I'm on public wifi and can't even load a paper I've reviewed because some ostentatiously profitable publisher has paywalled it. The research is publicly funded, the researcher pays $100 per page in publishing costs, and the review is volunteer work. Why exactly do you need $30 for me to read the methods section?


I agree with most of your comment, but why do you believe review should be volunteer work? In lots of non-academic fields you have professionals evaluating the quality of products, and being paid to do so.


"grifters complain about free labor"


Everyone is being told to put faith into "science" as somehow incorruptible, only to have articles like this written. We are supposed to be able to rely on the people writing papers being ethical, when it seems the reality is a race to the bottom like everywhere else in society.


Well, the problem in this case isn't really in the scientific or peer review process itself, it's in using citation count as a metric for paper quality or researcher expertise.

I've had people play this game on several of my papers under review. Reviews will say something like "high quality submission but the authors should consider including work X for completeness" etc where it's obvious that work X was written by the reviewer. As an author, I just found some place to plausibly cite the work and moved on. The quality or validity of my manuscript was not impacted in any way. In my admittedly limited experience, reviewers will rarely green light obviously inferior work just to get an additional citation as it impacts their reputation and it's an inefficient way to stat pad (self-citation or trading citations with your collaborators is by far the most efficient way to pump citation count, especially in the context of review papers). Additionally, obviously inferior works don't even make it to the peer review process; they are rejected by the journal editors in a first pass screen.

There are more insidious games people play that DO impact publication quality, such as requesting friendly reviewers (this mechanism is normally legitimately used to help editors route manuscripts for review to domain experts), submitting work to journals where the PI is an editor / has relationships with editors, attaching well known PIs to manuscripts to add the appearance of legitimacy even though they did no work and were not involved at all, etc. These are problems that won't go away because there are far too many researchers and not enough money / positions to go around, so people are heavily incentivized to play games.


I agree that the general public has basically no idea how the academic world operates and surely their trust in it would decrease if they knew. There are many sketchy practices and a lot is based on an "honor" system, which often devolves to back scratching. Lots of inflated claims for sexy publicity, mass produced PhDs etc.

People imagine science as this serious rigorous thing where everything is objective and unbiased careful "monks" investigate all the details with scrutiny. When actually it's often like a bazaar. You have to sway the reviewer with overinflated claims and "sell yourself" in a way that fully contradicts the scientific virtue of self-skepticism (finding reasons why you're wrong, not right). There's also a lot of who knows/likes who and subjective human factors. These are all obvious for people after a few months or 1-2 years as a researcher, but science has a very different image for laypeople who mostly know about it from school, mostly results that were produced in a different era. The number of scientists has exploded recently, there are tons and tons of papers written and they are all supposed to be some novel discovery and this great leap forward, which is clearly unrealistic at this scale. At the same time thorough analysis and skeptical papers are "boo"-ed down, seen as confrontational and are undervalued compared to sexy new claims.

Many institutions have become paper mills. Raw metrics like paper counts and citation counts have replaced careful consideration in many evaluations; it's basically an industrial-scale mechanized process. It feels like a house of cards that cannot be kept up indefinitely. The reputation of science will surely be hurt as the public starts to find out more.


This is an unnerving yet very accurate description of the current state of academia. I would nuance it by saying that some fields are more "competitive" (i.e. crooked) than others and there is, in general, a lot of respect for work done by others (sometimes expressed through mild jealousy).

I share your conclusion but hope that science itself prevails, that this state we're in now is part of the "self-cleansing" property of science and that specific scientists will be held accountable instead of the concept of science. But I can only hope for now.


On the other hand, to list some random newish things:

* Genome sequencing.

* New malaria drugs.

* Graphene.

* Better batteries.

* Deep learning.

* fMRI.

* CRISPR.

From the outside, the public doesn't care about the trials and tribulations of PhDs. Or irrelevant puffed up papers. Or the organisational politics of research.


That's true, there is a lot of good work out there of course.

And I'd say the system is still of concern to the public. Few readers of pop-science articles recognize when they are essentially reading a PR marketing piece rather than something like a consensus, textbook-like description of consolidated knowledge. The authors have special interests and biases beyond uncovering the truth: pressure for publication counts, tenure prospects, filling your CV, all sorts of political considerations, flag planting, etc.

Another thing is how these discoveries are made. I think people imagine it as if researchers research and ponder about things, and then, when they find something, they write it up to share it with the world and contribute their newfound knowledge. I know someone who did that in his PhD; he quit after several years without any paper submissions. When asked, he said he hadn't found anything yet worth publishing. In most places, people decide in advance that they're going to have a paper this year for this journal/conference and then they plan a solid way there. You decide what result would be sexy (keeping some secondary plans in place) and then do the whole thing. Several times things will not go as you expect, but the paper must be written. Then you rewrite history and come up with ridiculous "storylines" to make it seem like all your hypotheses naturally flow from prior work and everything fits into this nice story. Then you torture the data until it sings, because there must be a paper. You look for the smallest indication of something interesting, inflate it to sound like a breakthrough, speculate about far-reaching conclusions, etc. And then if your university has a good PR department, journalists will eat it all up and you'll be out in the media, impacting normal people's understanding as well.

(I realize I may be a bit too cynical here, and I exaggerate somewhat to get the message through.)


I get that, and you may not be wrong. It's that you may be right about the wrong thing. All this stuff matters a lot if you're pursuing a career in academia. But from the outside it just means the machine of science is a bit less efficient than it could be.

Also I put it to you that serious researchers are rarely misled by low quality, vacuous papers. That kind of work will be ignored by history.

So I agree that what you're pointing out is something of a problem, but may not constitute a cataclysm.


Scientists are human, and guidelines dictating acceptable behavior must exist and be enforced, just like in any other field of work.

A more open peer-review system would help with this issue and expose abusers like this, but it's not clear (to me) how such a system could work... At least this issue does not directly affect the quality of research, but rather corrupts the already noisy quality metric of citation count.


I think an open peer-review system would be great. It would also keep reviewers much more careful and accountable for their work.

It is a bit different from design docs/code review in the sense that there are N different authors competing for a place in a conference or journal -- so they may try to game the system, while in the case of design docs/code review there is no such competition.


Massive misunderstanding here:

Science doesn't work by noble individuals scrupulously following rules that they could ignore if they chose to.

Where science works, it works by incentivising one scientist to tear apart the next scientist's position. There's no better way to avoid having your argument torn apart than for it to be genuinely solid.

This is aptly demonstrated by Nature tearing into Elsevier...

The time to get really worried is when you stop hearing anything like this and get a message only of, 'Everything is fine, nothing to see here'.


> Where science works, it works by incentivising one scientist to tear apart the next scientist's position.

This is very true. In my graduate education that came out loud and clear - your job is to try and punch holes in other people’s work. If you find a hole, it’s their job to fill it.

That’s what good science looks like.


Science is incorruptible. Scientists are not. Don't conflate the imperfections of the humans implementing science as being problems with science.

Whatever you're proposing as the alternative to science has the same problem: it's implemented by imperfect humans.

What's your alternative?


> Science is incorruptible

You're taking that a tad too far. Remember, for long periods of time, certain disciplines were considered to be "science" which weren't quite that. It's not just individual personal failings.


Okay, but that's a semantic argument. The word "science" doesn't mean that any more.


Alchemy used to be considered science. Chemistry - the same science - used to have people concerned with ether and phlogiston. Apparently some physicians truly believed that it was aberrant for women to have a large clitoris; and still today, many doctors believe it's medically beneficial to cut off males' foreskins. etc.


Nothing you're saying is a disproof of what I said.

How do you know turning lead into gold isn't possible? Science. How do you know ether and phlogiston aren't real? Science. How do you know it's not aberrant for women to have large clitori? Well "aberrant" is subjective, so that's really not even a scientific claim, so science certainly can't be blamed for that. How do you know it's not medically beneficial to cut off males' foreskins? Science.

You can't accuse science of screwing up those beliefs: science is the only reason you know those beliefs are wrong. These are examples of science working.

This subthread started with me saying "Science is incorruptible. Scientists are not. Don't conflate the imperfections of the humans implementing science as being problems with science." If anything, your examples prove my point. What you've shown is that scientists have believed some wildly inaccurate things in the past, and science has corrected them.


Ah, I wouldn't be so sure. John Ioannidis's work does a great job of showing how science as we've defined it can generate results that are wrong. There are plenty of phenomena that can't be easily sussed out using the scientific method of controlled experimentation. The simplest example is if you need two parts A AND B to generate an effect. If you add A, you get nothing. If you add B, you get nothing. The scientific method suggests that both A and B do nothing. It takes a bit more clever work to recognize that both are required, but the scientific method insists on simplifying the model.

The point is that even science isn't well-defined and we need to be super flexible.


> The scientific method suggests that both A and B do nothing. It takes a bit more clever work to recognize that both are required, but the scientific method insists on simplifying the model.

Huh? No it doesn't. This is complete nonsense.

The scientific method would deal with this by hypothesizing that A combined with B might do something, then testing the hypothesis, and discovering that it does. Generating that hypothesis might be difficult--nobody said science was easy. But there's nothing about the scientific method that can't handle the situation you propose.

There are hundreds of examples of this. Try, for example, hydrochloric acid and baking soda as your A and B--I've never mixed these two and don't know what they produce, but based on my knowledge of acids and baking soda, I hypothesize they will result in CO2, even though neither of these decomposes into that gas by itself.
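
To make that concrete, here's a minimal sketch in Python (the outcome function and numbers are hypothetical, purely for illustration) of a 2x2 factorial check where only the A+B combination has an effect. Testing each factor alone shows nothing; testing the combination directly, as hypothesized above, reveals the interaction:

    import random

    def outcome(a, b):
        # Hypothetical response: only the A+B combination shifts the mean.
        effect = 1.0 if (a and b) else 0.0
        return effect + random.gauss(0, 0.1)

    def mean_outcome(a, b, n=1000):
        # Average over repeated trials to wash out the noise.
        return sum(outcome(a, b) for _ in range(n)) / n

    # A simple 2x2 factorial design: each factor alone, then both together.
    for a, b in [(False, False), (True, False), (False, True), (True, True)]:
        print("A=%s, B=%s: mean ~ %.2f" % (a, b, mean_outcome(a, b)))

Only the last condition comes out near 1.0; the point is simply that the method accommodates interaction hypotheses once you think to test them.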


The difference is that with science the truth will eventually out. No one says science is incorruptible, but the process of science means it generally keeps moving in the correct direction, despite normal human tendencies. The other part is that when science is found to be wrong, scientists change their assumptions based on new information. There are numerous examples of science being used for ill: light bulbs that burn out faster because of a cabal of bulb makers, functional vaccines for diseases locked up for years unused, and mistakes like radiation poisoning of workers and the public because we just didn't understand the ramifications at the time. All of these were eventually corrected by other scientists. Now, specific to writing papers: publishing papers is required for career progress in parts of academia. The journals are supposed to be ethical, but this problem has been known about and pointed out numerous times by ethical scientists.


... and then they'll be after networks of ~~bots~~ reviewers, and then networks of networks of bots, and then state actors, etc. Scientific publishing is quickly catching up to Twitter and blackhat SEO.



