Science papers rarely cited in negative ways (nature.com)
64 points by cryoshon on Oct 28, 2015 | 36 comments



"... the study, led by Alexander Oettl, an economist at the Georgia Institute of Technology in Atlanta"

"... negative citations are most likely to come from other researchers who are ... relatively far in geographical distance ... suggests that social factors — the awkwardness of contradicting someone whom you are likely to bump into in person — may play a part in science’s self-scrutiny."

"Michael Schreiber ... at the Technical University of Chemnitz in Germany ... thinks that the study’s definition of a negative citation might be too broad"

"Ludo Waltman ... at Leiden University in the Netherlands, agrees that the definition of negative citations used by Oettl’s team is broad"

Interesting series of events there.


It's probably not only geographic distance but also the relative size of the subfield that plays a role -

Ever since my PhD I have worked with a relatively minor plant, and after doing so for the last 5 years I now know all the "major" people in my sub-field and they know me, which made finding unbiased reviewers for my thesis quite hard.

I regularly bump into all of them at the same conferences, and they are the only ones who can gauge my work for committees etc. Citing them negatively would be career suicide (well, it depends on the cited person - a few would probably be alright and happy that their work is being improved, others would deny my existence). I've actually published one paper improving another guy's work and showing its flaws; I included him as an author to soften the blow, which isn't unethical since he also supplied some data and did some work.

If I worked in a bigger field (say, HIV) it wouldn't matter so much.


My hunch here is that the statistical tidbit the article mentions is a result of an abundance of negativity held behind closed doors, prior to publication.

Scientists typically present their findings to other scientists before and after publication in the literature, building relationships among and within research groups. Aside from building connections between researchers, these kinds of presentations are one of the pillars of the scientific community. The question-and-answer sessions during and after these presentations are frequently quite brutal (though eminently professional, as science is a formal and polite affair), with projects and hypotheses being verbally torn apart while the author stands on stage to gibber whatever defense they can rally while in the headlights. This happens to everyone, from the greenest grad student to the gnarled director of multiple programs.

This kind of routine tearing-down has a negative impact on people's psyches (you will find many scientists to be humble and self-effacing) but drastically increases the quality of the science handed off for publication. Instead of researcher A giving researcher B a negative citation when the results are contradictory, A may have told B in person or via email that the results B recently presented don't click with their research. This won't stop publication of contradictory results per se, but it will probably force someone to do clarifying experiments -- a good thing that advances the body of knowledge.

A negative citation is probably reserved for people who aren't connected to each other, or who have found a new spin on historical results -- my guess is probably from the rear edge of what's considered "current" (up to 3 years prior for immunology).


About the most negative, confrontational thing that happens in science is when a journal editor unwittingly picks a referee for a paper from a group that is in direct competition with the authors of the paper, and that referee happens to be a twat. They're supposed to be anonymous, but they often reveal themselves through various idiosyncrasies.

Experimental groups often wind up working on fairly similar things, exchanging personnel, etc. Sometimes these groups will cooperate on some things and divvy up other things so that they're not stepping on each other's toes too much. This works when the people in charge are capable of getting along. However, there are usually a few people in any given area who don't play well with others. These are the people who bring the public tear-down drama to conferences and the people who will obstinately race other groups to be the first to publish, even if that means using unethical means, such as filibustering the peer-review process or outright sabotaging it if they're chosen as referees.

Most people are decent, but there are a small number of assholes uniformly distributed everywhere.


The most confrontational thing I see is published comments on published papers. My PhD adviser published a paper, then a guy commented on it, then he wrote a reply to the comment, then the guy wrote a reply to the reply to the comment! The final reply ended up with some pretty crazy stuff in it. Here are my favorite quotes from his reply:

    Or is it the other question, that it is “forbidden” to modify the holy NEB method of Prof. Dr. [Prof. Name]?

    We propose to read the old Hamlet: “There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy.”
Yes. He quoted Hamlet!


Not only quoted, but cited! Priceless :_)

[10] W. Shakespeare, Hamlet, Act 1, scene 5.


I've never heard of the review filibuster before; that sounds really anti-Science and really shitty.

Science can get super political to be sure... but the public tear-down is part of the scientific process despite frequently being political posturing.


Many journals now allow you to explicitly name researchers you would prefer not to be selected as reviewers for your submitted paper.


Anecdotally, there are cartels of authors that fail to be sufficiently critical in the literature. This is undisputed. However, quite a large (sometimes overwhelming) amount of criticism and scientific conflict operates in the daily practice and presentation of science that occurs outside the formal literature. Within the primary literature, variation in results or technical questions about methods are often not cited in a manner that is overtly "negative" from the perspective of a field outsider, but those in the field will understand and recognize the conflicts. These inconsistencies or disagreements in results are most often made explicit in review articles. A broad analysis of this sort is unlikely to grasp these kinds of subtleties in scientific practice.


Very often, one reports that a particular phenomenon or result reported previously could not be reproduced in a different experiment. This is just reporting that that particular result may be complex and context-specific. It is a negative citation but has no negative connotation; it does not even imply disagreement.

Only in the relatively rare cases where an exact experiment is repeated in the same context and the stated result could not be reproduced is it an attack on the credibility of the original work and truly a 'negative' (as in bad) citation.

The issue here is that the 'negative' used in a scientific sense is exactly that - a 'negative'. In colloquial usage, 'negative' means bad. The headline is just using this semantic difference as clickbait.


What happens is that direct replications are actively avoided so that the possibility of a negative (as in bad) citation does not come up. That appears to be the most popular "solution" to this problem, to the point it has been institutionalized and journals discourage publication of replications, etc.

This effectively turns the area of research into pseudoscience in my mind. I don't see how a method that does not require independent replications can be considered science. I guess you could argue about the definition of science, but any method without replication is very different from the one that has brought great benefit to mankind.


>I don't see how a method that does not require independent replications can be considered science.

There is an implicit benefit of the doubt here - that the scientists themselves have independently replicated the experiment before publishing the result. Most papers will list how many times an experiment was repeated by providing the sample size, number of biological replicates, number of technical replicates, orthogonal evidence, etc.


>"the scientists themselves have independently replicated the experiment"

Is this independent though? One person/group repeating the same experiment is much less convincing evidence of a stable phenomenon and control of the experimental situation. I take independent replication to mean others getting similar results.

Going further, the ideal situation is when there is healthy rivalry (such as between universities), so that multiple research groups have an incentive to find flaws in each other's claims.


The dynamic there is that there's little incentive to replicate results - no journal is going to publish an article based on that. Why spend your time and money on that then?


>"Why spend your time and money on that then?"

Well huac,

That's a problem for those of us who think independent replication is a crucial part of the scientific method.

If no one is going to attempt replicating my research and there are substantial obstacles placed in the way of me replicating other work... why waste my time? It doesn't pay very well, and it seems I am forced into a position of contributing to pseudoscience.

Eventually only people who do not consider independent replication crucial will remain.


Independent replication does occur when someone furthers the research, usually as a preliminary test of current knowledge before diving into new experiments. I personally took great joy when what I had reported was used and confirmed in the preliminary experiments done by other labs for their further studies.

But there is a deeper philosophical issue with your idea of the scientific method:

Consider what is meant by independent replication. Does it mean that the experiment must be repeated from scratch with different material etc. such that it is an independent test from the previous attempt?

Or does it mean that it has to be repeated by different people as well?

It raises a conundrum. If people are a factor that must be controlled for by this definition, no replication can ever occur, since the exact individuals that performed the experiment will by definition have changed during this 'replication' attempt. The skill and knowledge of the people involved are a factor that is notoriously difficult to quantify objectively. This often results in statements such as "X method/assay/procedure does not work well in our hands".

On the other hand, if people are not a factor, then replication of the experiment by the same group of people will also qualify as an independent replicate.


>"Or does it mean that it has to be repeated by different people as well ?"

Yes. The purpose is to 1) demonstrate the experimental conditions have been mastered to the point they can be communicated effectively, and 2) demonstrate the phenomenon is stable in the face of any unknown factors specific to a place and time.


1. Effective communication of a protocol has no relevance to its efficacy. So that's a plainly unscientific test of the 'truthiness' of anything.

2. No phenomenon ever, scientific or not, can be guaranteed to be stable in the face of all 'unknown' factors. They are unknown, you cannot make any statement about that.

I understand where you are coming from, but that puts you right in the middle of the philosophical quagmire mentioned in my earlier comment. If you mean 'variation in people performing the experiment' by those 'unknown factors' then essentially you are making the assumption that the people are not a factor that is expected to influence the result. As such, even the same team repeating the experiment would suffice. Anyway, you get the point.


It's a time-tested heuristic. If a result cannot be communicated well enough for others to replicate it, or if it strongly depends upon local conditions (it doesn't really matter which reason), we should either focus on something else or figure out why. If there is no independent replication, there is no chance to learn either of the above and no reason to have confidence that we know what is going on.

I see no quagmire, it is all very straightforward. I will not believe my own results until others replicate them. Even once is not enough to hang my hat on. What alternative approach do you use to judge whether an observation is worth theorizing about?


I don't disagree that independent replication is useful or crucial. The obstacle is one that is intrinsic to academia - 'publish or perish.'

It is logical for a researcher to devote their time and energy to research that will be published, and journals just don't devote space to independent verification efforts.


I guess there is a fundamental dichotomy here between presenting a "method": "you can do X and Y to measure the value of Z", and presenting a "final result": "the value of Z for system W is N". In the former case, people will absolutely replicate you, and if your method is bogus you'll be called out on it.


I have some modest experience with telecommunications (engineering) research. In that field negative citations are standard practice to provide motivation for the work you are about to introduce in a new paper. This does not necessarily have any negative connotation: it is merely a reflection of the fact that progress is made by improving on existing results.


When I was a grad student, I was explicitly advised to make such citations uncritical. Instead of saying, "Researchers A, B, and C failed to consider cases X, Y, and Z, which we address here," you would say, "Researchers A, B, and C solved this problem under the assumptions that X, Y, and Z were not issues. We extend their work here to handle these cases." That is, you say what they achieved, rather than what they didn't.

Everyone wins with this approach. Sure, those lazy bums didn't consider cases X, Y, and Z, but then no one else had solved the problem regardless of whether or not they handled X, Y, or Z. That's how A, B, and C's results got published!

You're building on their work. You look better for acknowledging it rather than trying to knock down those who came before, and they are likely to be respected quite a bit more than you are. Also, researchers A, B, and C are the most likely candidates to be reviewing your paper. So there's that.

This is, of course, orthogonal to the issue of unreproducible experimental results.


I agree with you. You can and should point out shortcomings of work done by others in a respectful way. Even turn it around, if possible, as your example shows.

Sometimes, however, there is no easy way to avoid saying what is missing. As an example, consider the case where the published research, without giving any explicit justification, does not address one aspect of a problem that another researcher working in the same area finds important. But even in that case, you should not fail to acknowledge the work that you are building on.

As another comment suggests, my interpretation of 'negative citation' is probably less strict. I would count as negative even politely pointing out a shortcoming, without questioning the validity of the work in a confrontational manner.


As others have mentioned, scientists tend to be pretty humble.

Even when a previous paper was completely wrong (say a chemical structure was incorrect) very rarely do follow-up papers make a negative comment. They usually say "our results differ and this is why we think we're correct".

I think part of it too is that it's not hard to be fooled by your own results. Even a relatively innocuous error, or a failure to notice something in the data, is really easy to make. I think follow-up papers are less critical since scientists can understand why the original paper was wrong.


But the thing is, those aren't disagreements. A disagreement is a claim that the cited researchers didn't find what they think they found; that essentially some of their results (perhaps all) are false. That does have a negative connotation; the "improvement" consists in uprooting some falsehood.

Improving on engineering is nonconflicting. The existence of an excavator doesn't refute the shovel, and so on.

A disagreement in, say, CS would be, "Foo Barley [42] mistakenly estimates an upper bound of O(n log n) on his algorithm; here we prove, in fact, a quadratic lower bound! Moreover, the algorithm produces incorrect results in several cases that Foo Barley doesn't consider, despite his presenting it as general."

How much of that do we see?


To put this into the concrete terms of a story..

I used to be in the NLP group of a good school on the east coast. I was trying to reproduce an algorithm from a paper that won Best Paper from a good school on the west coast. A postdoc from that school said, "oh stay away from that algorithm. they basically cooked the results of that whole paper by cherry picking data. it doesn't really work at all".

At that point I dropped it and never thought about it again -- there is no career value for a PhD student (the engine room of research) in "outing" bad research. I was just trying to get some baselines to compare my own work against. On the flip side, getting a best paper award at a top conference is the kind of thing that sets you up for faculty interviews at your choice of university.

All that said, academia and the scientific process tend to work well, in aggregate and over time. I think a lot of the time people with juicy or disgruntled stories speak more and get remembered more, and it casts academia in a far more negative light than reality warrants. Most of the time, science works well... just in a way that doesn't produce stories worth posting to boards like HN. So that's my counterbalance.


Open disagreement is indeed rare in my experience.

EDIT: That may have something to do with peer review: it tends to prevent the publication of work that will be openly disagreed with later on.


This doesn't feel like particularly new research - my PhD supervisor did work in the area of argumentative zoning nearly 10 years ago: http://www.cl.cam.ac.uk/~sht25/Project_Index/Citraz_Index.ht...

I initially started my PhD research looking at sentiment analysis on citations, but I found it wasn't a particularly interesting field. As the linked article says, negativity is pretty rare, and it's also something that human annotators disagree about a lot, as it can be expressed in some very subtle ways. The formality of paper publication has a lot to do with it; there are avenues for critical feedback and conflict well before the paper gets published.
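
(Purely as an illustration, not anything from my actual work: a naive keyword baseline for citation sentiment might look like the sketch below. The cue phrases are made up, and the subtle, polite cases are exactly what such a baseline misses.)

    # Hypothetical sketch of a keyword-based citation sentiment baseline.
    # The cue phrases are illustrative only, not a real annotation scheme.
    NEGATIVE_CUES = (
        "fails to", "could not reproduce", "in contrast to",
        "overestimates", "does not hold", "contradict",
    )

    def looks_negative(citation_sentence: str) -> bool:
        """Flag a citing sentence as negative if it contains an explicit cue."""
        s = citation_sentence.lower()
        return any(cue in s for cue in NEGATIVE_CUES)

    # Explicit criticism is caught, but polite hedging slips straight through:
    looks_negative("Our results contradict those of Smith et al. [3].")          # True
    looks_negative("We extend the elegant approach of [12] to the open cases.")  # False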

I found that looking at how people talk about each other had a much richer depth of expression, although even more nuanced and harder to get annotators to agree about :)


When I write papers, and more generally in all of my conduct within academia, I follow a cardinal rule: Be Nice. I cannot imagine being negative towards another paper in one of my own.

On the other hand, I'm a mathematician, and I very rarely (i.e. never personally) see incorrect papers from others in my field.

I will also add that I've been quite negative when reviewing papers. But that occurs behind closed doors (for good reason --- even good people make dumb mistakes sometimes).


I think that is reasonable. The bulk of citations are used to give background for an article - to show what is already known and what is novel in the specific article.

E.g. "Foobars have previously been shown to be faster to widgets in vacuum[4][6][8]. In this article we examine whether this remains true in normal atmospheric conditions"

that should be more common than

"Foobars have previously been shown to be faster to widgets in vacuum[4][6][8] but those studies probably suck. In this article we examine whether they are faster in normal atmospheric conditions"


Funny how this reminded me of the recent discussion about Facebook reactions: https://news.ycombinator.com/item?id=10355556 (with the difference that in this context people would seem to need a dislike button, though not want one).

Maybe a system providing a private dislike function would yield a balanced solution?


You mean, contacting the authors directly?


Of course, citing someone to say that their results are completely wrong still increases their citation count...


Okay, so 2.4% disagreed. Why is that interesting?

Is that too little? What would a healthy amount of disagreement look like?


But then there is the "grimace index" experiment, where scientists at McGill University in Montreal tortured mice, increasing the amount of pain in order to measure their facial expressions.

There was a public outcry, but the researchers still got plenty of citations.

http://www.nature.com/news/2010/100509/full/news.2010.228.ht...



