Cracking down on research fraud (undark.org)
331 points by apsec112 on July 26, 2020 | 201 comments



These issues of research fraud come up often, but the root of the problem is a bit more subtle.

In North America at least, biomedical research labs operate largely as fiefdoms of the individual principal investigators (PIs). The actual research work falls almost entirely on the backs of grad students and postdoctoral fellows. The grad students need to generate "good" data in order to graduate; the postdocs need the same in order to gain real employment (with only about 10% gaining faculty positions themselves after many years of postdoctoral training). The PIs need such "productivity" from their trainees in order to gain the funding that keeps the labs going. The PIs themselves face success rates in grant applications that are often 10% or lower and, particularly early in their careers, their job security depends almost entirely on their ability to secure grant funding.

These competitive pressures create enormous incentives for otherwise conscientious people, all the way along the hierarchy described above, to fudge their research data. Research fraud is thus a direct outcome of a fundamentally broken approach to the structure of research funding.

There are exceptions to that approach, however. The not-for-profit Howard Hughes Medical Institute (HHMI) [1], and to an extent the intramural research programs of the NIH [2], offer funding for PIs to do what they do best, without the pressure of competing for scant funds. Not coincidentally, some of the best science comes out of these sites.

1: https://www.hhmi.org/scientists
2: https://irp.nih.gov/about-us/what-is-the-irp


>labs operate largely as fiefdoms of the individual principal investigators

To add insult to injury, you wouldn't believe the lengths some PIs will go to in order to attain/preserve their 'power'. Illegal, unethical, and pathetic.

The core problem is that PIs are human, and humans are flawed. The solution (if there is one) should take this into account and somehow try to reduce the problem systematically, at an institutional level. The trouble is that the ones who make the rules are not going to fight against themselves... I wish I had something more to add, but that's it; that's the state of the field.


> The core problem is that PIs are human

Actually, the problematic behaviours you describe are encouraged at the institutional level, because institutions also get more funding if they output more/better research. The fact that postdocs don't want to commit scientific suicide for the sake of morals in that context should surprise no one.

The ground truth is that scientific research has been turned into a poorly regulated industry, and we are all poorer for it.

There's a very simple solution to alleviate this pain: be much more selective starting at the undergrad level. But that generates less money, so baaaaad...


>[...] the problematic behaviours you describe are encouraged at the institutional level

Yes, which is why I wrote,

>[...] and somehow try to reduce it systematically, at an institutional level.


This is also why you see announcements of new "breakthroughs" and then nothing further ever happens. The experience is so unpleasant for many people (often compared to slavery) that they leave when they get their degree and never work on that project again. The PIs also get bored and want to do something new to keep building their reputation (so onward to the next grad student or postdoc). Patents are sometimes generated, but new ideas are very difficult to take to market successfully, so those that can afford to buy or license them do so very carefully (unless they want the patent to attack some competitor's product). In general there is rarely any follow-through on new ideas, unless you work on projects with DARPA funding where they insist on "technology transfer", and even then many ideas turn out to be unworkable because they require extensive additional engineering for the practical aspects.


Another variant of the incentive problem is more insidious. Many academics are driven by ego, and specifically a desire to be influential. One way to be influential is to have great, original, and correct ideas. This can be a good incentive, but coupled with our human ability to deceive ourselves, it can become pathological. I've seen researchers get convinced that their ideas are so beautiful and right-feeling that they just can't be wrong, and they will torture the data and run experiment after experiment until it appears to work. Sometimes this crosses into outright fraud without the researcher even realizing it: after all, the theory is right, so if the data don't match it, the data must be wrong!
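(A back-of-the-envelope sketch of why "experiment after experiment" eventually produces a spurious result, assuming each run is an independent test of an effect that isn't actually there, with the conventional p < 0.05 threshold:)

    # Chance of at least one spurious p < 0.05 "success" in k tries,
    # when each independent try of a true-null experiment has a 5%
    # false-positive rate.
    for k in (1, 5, 10, 20):
        print(k, round(1 - 0.95**k, 2))
    # -> 1 0.05 / 5 0.23 / 10 0.4 / 20 0.64

Run the same null experiment twenty times and there's a ~64% chance at least one run "works".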

This problem seems much harder to fix than research funding, because the question is how to allocate prestige and influence, not just money.


> I’ve seen researchers get convinced

Where have you seen them?


This confirmation bias is everywhere. As in, “this calculation looks off, we should re-run it” or “this number is wrong, let’s call it an outlier”. Sometimes there’s a good reason, sometimes it’s wishful thinking and they are hoping the next one will be more “correct”. Distinguishing the two is not always easy, either.

From my experience, it is checked most of the time before it becomes fraud, though.


Yes, but where is your experience? Academia, industry, STEM, biosciences, the pub?


All of them except biosciences :)

Condensed matter physics in a well-known university. But it does involve working with engineers and quite a lot of pub-going.


Aside: To give readers an idea of the financial pressures, here's an example. One of my former professors told me that lab space costs ~$45/sqft/mo from the university, 300 sqft minimum. That's ~$160k/year just for the lab floor space, not counting office space. Now, electrical, heat, vivarium, elevators, janitors, security, etc. are all rolled up into that. But that doesn't cover specific equipment costs, reagents, grad students, or the professor's own salary. Granted, every university is different (just look at patent ownership clauses).
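(Worked out: 300 sqft × $45/sqft/mo × 12 months = $162,000/year, hence the ~$160k figure.)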

It's not cheap and you must win grants just to pay rent for your own apartment. Especially early on in your career, it's a real struggle.


I have to disagree with this 'good people gone bad' analysis. None of the research fraud I have witnessed fits this model.


Could you give more details on other causes of research fraud you've seen?


One was a post-doctoral scientist, called X. The lab I was in was asked to take X on by the institute because they had a falling out with another lab, the details of which were not disclosed. A year later, after very poor productivity and various petty issues with other people in the lab, someone checked X's data collection where the original data was available on the instrument, and found that it was all made up. Subsequently the previous lab investigated, and a paper X published as first author was retracted. It turns out that in the previous lab there were also concerns about data fabrication and many other problematic interactions before the fraud was discovered. It also turns out X had left another lab before this under similar circumstances.

Another was a well established Professor 'P' with a very large number of published papers, who continued to publish multiple papers at a high rate. There had been doubts for years due to data that seemed too cute and the fact that P always took the data and made the final figures in the paper themselves. P was finally caught out when a paper included data which could not possibly have been collected, and was outright fabricated.

It is worth noting that both P and X left the institute without disciplinary action beyond employment termination and signing an NDA. One of them is in another city, getting government grants and doing just fine. Of course, no one in the know trusts any of their papers, but I guess that doesn't include whoever reviews their grants.

None of these are 'good people gone bad'. They are flawed individuals with no integrity and sociopathic personality traits, and they waste a lot of decent people's time dealing with their actions.


That's the damnable thing. The result in each case is an NDA. These things never go public. It's in no one's interest.

MIT is chock-full of people who've signed these sorts of NDAs. You shall not talk about widespread research fabrication. You shall not talk about the professors who partied on Epstein's island (except for the few honest ones who came forward; they're ostracized). You shall not talk about corrupt conflicts of interest. And so on.

I've heard similar things about Stanford and a few other elite schools, but I have no first-hand knowledge there.

Why is this even allowed?


I agree completely that some people fit that bill; I'd argue however that even for them, the skewed incentives of research success certainly help support that particular approach to fraud. My premise was that most of the people committing research fraud are probably not sociopaths.


The bigger problem is that research fraud leads to research impact which leads to academic hiring.

I think most graduate students come in with pretty good integrity. By the time people are promoted for tenure, you've had multiple filters for sociopathic behavior.


HHMI does fund some of the most enormously successful work in biomedical science, but they start with the very best of the best. It's not clear how much of these labs' success is the result of the precise funding structure, although it's obviously helpful to spend less time writing grants (or just to have additional resources).

These labs are up for renewal from HHMI every 5 years, and as such, do face the same pressure to continue publishing high-impact work.


HHMI is crazy; it has definitely had a positive impact on my girlfriend's PhD experience and allowed her to take on a 'postdoc killer' project. It's taken her almost 3 years, but she's gotten great quality data, and just got a great review back from Nature Chem with proposed experiments, as expected.

Also, the period can increase as time goes on. Her lab doesn't have to renew for another 7 or 9 years, I think (they renewed last summer).

Just my 2c, it seems to me like these funding structures do have a positive impact on the research environment. That said, it's obviously a model that really requires them to select the best of the best labs as you mention, because one org (or even the gov) quite simply can't give blank checks to every lab.


> That said, it's obviously a model that really requires them to select the best of the best labs as you mention...

It seems like it's also a model that makes labs better. What labs will be the "best of the best labs" is not independent from the way their funding sources provide pressure/rewards/punishments to do work in certain ways or do certain kinds of work.


Yes, you are very much correct. Unfortunately, if HHMI and the like are unwilling* to do these sorts of funding experiments, I'm not really sure who will.

* I can't claim to be up-to-date on the distribution of labs funded so for all I know someone is already doing this.


HHMI is not the only one - certainly DARPA is willing to provide large funds for specific areas of interest (though with a VERY different funding model). The Wellcome Trust, the Allen Institute, the Broad Institute, and others also work with slightly different models that can complement the NIH funding lines.


The topic of funding is a recurrent one in these types of discussions. There are always more people wanting to do science than what is available in funding. There needs to be a process to allocate resources that takes into account competency for the job. Just granting it at random or equally is probably very very far from optimal if it means that an exceptional PI cannot even do the things to push his/her research to the next level. Research funding is a bit like a command economy in that there are no market forces that can be drawn upon to just sort it naturally, so of course it ends up being especially thorny.


> Just granting it at random or equally is probably very very far from optimal if it means that an exceptional PI cannot even do the things to push his/her research to the next level.

What does "far from optimal" mean? Let's pretend the allocators are not god-like decision makers (if they are, let's fire all the scientists because our god-like decision makers can do the research for them).

Let's just accept that if we have a large bunch of diverse potential researchers, and limited funding, the best way to get a breakthrough is by funding a random sample of these researchers.

Maybe this sample should be stratified, but it should not be stratified in such a way that it incentivizes chasing grants (effectively making the grant-allocators the real PIs - with all their flaws and biases systematically driving the process, but without any accountability or management effort on their part).

We can't get optimal without unlimited wisdom, and if we had unlimited wisdom we wouldn't need scientists.


Why do you need unlimited wisdom to rate the merits of the people?


> There needs to be a process to allocate resources that takes into account competency for the job.

Counterpoint: the current system doesn't do this. It optimises for competency at attracting funding. This occasionally overlaps with competency to perform good research, or competency to perform research well, or competency to oversee, nurture, and guide research in others. But that is far from given.


I think there should be two pools of money, one that is granted at random to anyone who has obtained a professorship at a university, let that be the first filter for competency. The second pool of money can be given to projects that have immediate social interest and impact, and the government can, in this case, be very selective as there should be some social weight to succeeding here.

The first pool is for idea generation, failure is an option. The second pool is to direct invention for social benefit.


I guess the first positive outcome of this would be that everybody working at a university would be immediately promoted to be a professor. ;-)


To follow up, I believe there's a subtle problem with funding in general, which is that increasing the funding available means that more research positions will be created, which means that eventually grant proposals go back to the same dismal funding rates they were at before.


That's an interesting Malthusian argument. I think I mostly agree, in the sense that the vast majority of PhD students I know had plans of staying in academia but had to join industry precisely because of the lack of resources. But if there were more resources and they did join academia, they would then oversee an even greater number of grad students until we again hit subsistence.


> There are always more people wanting to do science than what is available in funding.

So why not just fund them?

Any process you could create or that currently exists just ends up being gamed by people who understand how to get funding.


There are no ways around that. You cannot feasibly make funding infinite nor do anything about the lack of political will for increasing research funding. Even if funding was to increase dramatically, you'd get people coming up with even more expensive research projects (new particle accelerator anyone?) and there would still be winners and losers of that funding game.

There's also the fact that a PI minting dozens of PhDs over their career, especially in fields with limited industry opportunities, inherently produces a pyramid scheme that is bound to push people away from science.


Why not?

We seem to have no problem with infinite funding for Department of Defense and to bail out banks. The amount needed for science is paltry by comparison and typically has much better long term return on investment.


It would be far more fair to say that we have no problem with infinite funding for non-discretionary entitlement spending, since that's most of what the Federal Government spends money on. Military spending in the US is not wildly outside the norm for the developed world as a percentage of GDP, and money spent to "bail out banks" is a rounding error in terms of spending, and has historically turned a profit for the Federal Government. One of the largest sources of research funding in the US is actually the Department of Defense.

As for why research doesn't get more funding, sadly there is a faction in the US government that is opposed to universities and to government spending in general, and views research funding as easy to cut without provoking public outcry among their core voting demographics. This is the same reason that the Department of Defense is now such a large part of research funding, that same voting demographic is far more vocally opposed to reducing military spending.


What is entitlement spending?

These?

"Social Security and Medicare are sometimes called "entitlements," because people meeting relevant eligibility requirements are legally entitled to benefits, although most pay taxes into these programs throughout their working lives."

https://en.wikipedia.org/wiki/Expenditures_in_the_United_Sta...


Pretty much, “entitlement” isn’t a technical term but it refers to government spending on pre-defined benefit programs. A better term above might have been Non-discretionary Spending, but most of it is entitlement spending. And yes, Social Security and Medicare are the two most significant components, followed by Medicaid as the third. Those three plus interest on debt constitute the majority of the federal budget.


For sure. I was just trying to be sneaky and point out that we pay taxes for our "entitlements" so even though it's an accepted term, I think it's an inaccurate one and implies things about the programs people call "entitlement" programs which aren't true.


The funding for the military is far from infinite, and the US is actually pretty "privileged" as far as how much money trickles down to grants for seemingly unrelated stuff through DARPA and other similar agencies.

The discussion on how much money should be allocated to research is a different one, and it's always going to be finite. In the end, funds for research as conducted in universities have to be extracted from societal economic outputs, be it from taxes or otherwise, which is sadly not infinite.


You don't need to make funding infinite in order to meet the current demand for it.

And increasing funding will not increase demand in lockstep. There are indeed more people who want to do science than there is money allocated to pay them, but the number of those people is not going to vastly increase if 'researcher' continues to be a job that requires years of post-secondary education, has long, shitty hours, and pays peanuts.


Well, if the goal is to fund everyone, the pressures that ensure it “requires years of post-secondary education, has long, shitty hours, and pays peanuts” will likely ease.


> with only about 10% gaining faculty positions themselves after many years of postdoctoral training

There are no (steady) places for everyone, sadly. That's the way things are: postdocs are pushed to be extra productive, with the promise of a permanent position somewhere (or a letter of recommendation, whatever).

Currently, the postdoc 'experience' goes by multiple names: research associate, research fellow, research engineer, as if it were a 'career' to pursue, with minimal benefits, a salary tied to research projects (so it can stop when the project ends), etc.


Rest assured it’s not just North America.

https://forbetterscience.com/2020/03/26/chloroquine-genius-d...


I agree with the subtle point behind the fraud crisis: misaligned incentives.

However, that doesn't excuse the individuals doing it. "Because jobs" is a terrible ethical excuse.

Nevertheless, you have an interesting point there.


What do you suggest as an alternative?


This entire thread is really frustrating for me to read. A lot of these ideas are one-liners from people who have had very little experience with different parts of academia, except maybe as a student. Every single idea here has already been debated to death by academics (not just university academics, but funding agencies, academic societies, award panels, journal editors, and conference organizers), with changes happening all the time.

If it was this simple that a random person here could come up with how to "solve" academia, we'd have already done it decades ago. The ideas also lack nuance and when you get into the definitions of things (for example, p-hacking), then things become a lot more grey; are you allowed to look at a dataset that you spent 2 years collecting if your first hypothesis does not pan out? The clear cut cases are obvious to everyone, it's the grey area that takes 99% of the time to figure out.

Imagine reading a thread where everyone is proposing "solutions" to software development. It'd go something like: "software development is a cesspool and 80% of it fails (see voting systems, MySpace, electronic health records, Theranos). Here's what software companies need to do:" [yes, I'm being intentionally stupid to demonstrate how annoying this is]

1) stop releasing before the bugs are fixed. Software and games are rushed out. Companies need to take their time to fix the bugs so the users don't have to encounter them.

2) no more technical debt. Programmers are sloppy and introduce technical debt because they are not incentivized to do high quality programs. [yes, see how triggering that is]

3) cap team sizes. Everyone knows that large teams fail more spectacularly. Gmail and Napster and the original version of Google were made by a group of 4 people. Software teams need to be 4-5 people max.

4) programmers must use a transparency scorecard. Software companies like Oracle and IBM charge ridiculous amounts for their work. They hide costs and cut corners. Programmers should be transparent about the work they are doing each day, what data they access, and which functions they are writing.

These changes need to happen. derp derp


Not sure who downvoted this, but this is 100% on the money in my experience.

People here are writing comments like that funding should be tied to "how systematic, logical, and well-documented the research is" as though these things are not correlated to the existing criteria at all.

Much of the criticism here is like someone who knows how to build roads criticising agile software development because it makes it look like no one knows what they are going to build in the end. It's frustrating, and wrong, but the errors are subtle and aggregative.


Yeah I have no idea how it got to -2, but maybe it was my tone. I was even trying to not call out any specific post or debate any single issue.

The thing about academia is it's full of people who love talking about ways to make it better, and rotate into positions of power where they can change things after a few years.

The solutions are complex, requiring convincing many different stakeholders (even if they're amenable to the change) and nailing lots of detail to make it work right. Because people's lives and careers are on the line. Reputations of entire fields, the way medical discoveries happen, billions of dollars of taxpayer money, major institutions, etc. are not things you want to hack, only to discover that whoops, you just incentivized the wrong thing and set back cancer research for a decade.


So, what's the best proposal or line of thinking coming from all this deep thought about how to solve it?

Because you can't possibly be saying that the current system is the best we can do, or that the problem is intractable.

(I agree with most of your proposals about software engineering btw. We should do more of this).


It'd be a process: participate in a community in a subfield, listen to ideas that have been tried and their outcomes, come up with an idea using those lessons learned and success metrics, build consensus around the new idea, test it in a single instance (like one conference, grant review panel, or tenure committee at one university), share lessons with the field, do this for a couple of years to show clear success, expand to multiple events in the field, become an exemplar field for that idea and "infect" other fields.

Sounds slow, but there are thousands of such experiments happening simultaneously right now. This is how a lot of major field-sized changes have happened, like the transition from journals to conferences (which had many initial problems, like during tenure review, or a lack of quality in reviews), etc. Ideas will lose traction at various stages (for example, there was a movement some time ago to use alpha=0.001 instead of alpha=0.05 for null hypothesis testing, which has remained limited to a field or subfield); see the sketch below for what that threshold change means in practice.
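(To make the alpha comparison concrete, a minimal simulation; the sample sizes and seed are arbitrary choices for illustration, not anything from the comment above:)

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    # 1000 experiments where the null hypothesis is true: both samples
    # come from the same distribution, so every "significant" result
    # is a false positive.
    pvals = np.array([
        ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
        for _ in range(1000)
    ])
    print((pvals < 0.05).sum())   # expect ~50 false positives
    print((pvals < 0.001).sum())  # expect ~1

At alpha=0.05, roughly one null experiment in twenty "succeeds" by chance; at alpha=0.001, it's one in a thousand.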


Interesting, thanks.

Is the move to preprint servers (like arXiv) a part of this? How does Sci-Hub (and similar) figure into it?

It does sound slow, and a bit trivial, if I'm honest. Are there any examples you can share of successful experiments that have travelled across field boundaries?


You're right. Love your example.

HN does not have much actual collective expertise in this area (say: governance structures for technical research), but HN is solution-oriented, so ideas will be proposed. They just tend to be not very good. (See also: HN climate-studies threads.)

I've spent 10 or 15 minutes on this thread, and didn't read any comments actually building on what the OP said. OP's author does have expertise in this - he has been writing a series of good stories for Science magazine in this general area. Sigh.


Thank you for writing this. Your examples were lethargic. I know I should stay away from the 'academia is broken duh duh' threads here but I get sucked into them anyway. They're always full of armchair experts who don't know anything about universities or research except from being students or at best junior researchers and then think this makes them experts on the matter.

But the article itself had that same vibe for me, especially with how cocksure it is that the 'questionable research practices' are 'fraud'. I mean, come on - someone making up the responses of a 500-person questionnaire; yes, that I could call 'fraud'. But not being included as an author on a paper, or being included when you didn't contribute that much? I've been in both situations and in some cases, I was completely fine with it; in others I was a bit miffed (mostly because of the same interpersonal frictions that happen everywhere where people work together) - but in none of those I would call it anywhere near 'fraud' or even 'dishonest'. Yes, there exist people who pay 1 or 2 people to write papers for them and then publish those papers with themselves as the only author. Again, that I would call 'fraud'. But the 99.9% of other cases - not even close. Just because there is one billionaire underage sex trafficker doesn't mean all of them are, or that 'the system' is 'broken'.

And this is how I, again, got sucked into a completely non-productive 'discussion' that is so far removed from reality so as to be completely irrelevant anyway...


> But not being included as an author on a paper, or being included when you didn't contribute that much? I've been in both situations and in some cases, I was completely fine with it; in others I was a bit miffed (mostly because of the same interpersonal frictions that happen everywhere where people work together) - but in none of those I would call it anywhere near 'fraud' or even 'dishonest'.

As I understand it, ghosting becomes a more significant issue when it enables the omitted author to peer-review their co-authors' papers (and vice-versa) without disclosure of the conflict of interest.


shrug Sure, theoretically, but someone who wants to ensure a positive review can just as easily collude with someone who didn't do any of the work. And even then - yes, it's possible that a completely bogus paper gets signed off on by a dishonest conspirator, but the editor should see the discrepancy between that and the reviews of the other referees, and dig deeper. But the real problems start in the grey zone, like when you're on the fence between 'reject' and 'major revisions'. There is no 'objective' truth there, and quite honestly, as an author it's a crap shoot and always has been. It sucks the first few times it happens, but none of it leads to 'peer review being broken'.


To add to this - there are plenty of issues with peer review, but someone deliberately avoiding an authorship so they can peer review seems low down the list of real world problems.


> If it was this simple that a random person here could come up with how to "solve" academia, we'd have already done it decades ago.

Just because an idea is simple to come up with, doesn't mean it's also easy to implement.

Also, can you explain to me how your programming analogy works? Because I agree with a lot of it, though it seems I'm not supposed to.


Sure I guess I didn't make that very clear. Just going down the line:

1) stop releasing before the bugs are fixed. Software and games are rushed out. Companies need to take their time to fix the bugs so the users don't have to encounter them.

- This is a tradeoff between shipping time and bugs. You will never fix all bugs, so it's unrealistic. And it's a business decision in many cases. Even the definition of a bug is tricky, like is a usability issue a bug? So it's just a naive idea.

2) no more technical debt. Programmers are sloppy and introduce technical debt because they are not incentivized to do high quality programs.

- No one wants to create technical debt. Obviously it slows down development later on. Again this could be a business decision. Some programs like one-off data science scripts don't need to fix all their technical debt. Technical debt also accrues naturally (like just changing environment, platform, standards) so it's not possible to aim to not have debt in the very beginning. Hindsight is 20-20 and all that. Saying programmers are not incentivized to do high quality programs is just a blanket naive statement, and depends on the definition of high quality programs.

3) cap team sizes. Everyone knows that large teams fail more spectacularly. Gmail and Napster and the original version of Google were made by a group of 4 people. Software teams need to be 4-5 people max.

- Depends on the type of software. Can't just generalize given a few token examples. Expectations also change over the course of the product.

4) programmers must use a transparency scorecard. Software companies like Oracle and IBM charge ridiculous amounts for their work. They hide costs and cut corners. Programmers should be transparent about the work they are doing each day, what data they access, and which functions they are writing.

- This uses one subsection of the software economy to make a point (as a fallacy). But also some of these measures don't make sense, like some programmers read a lot of code or delete lines, and so the metrics are not generalizable.

In summary, these are ideas that someone who has not done long-term software development would say, or someone who has only had experience with one type of software or company would say. They're not well defined, not generalizable, and don't account for the complex and varied sociotechnical process that software development is.


1) I think the idea of funding "people, not projects" (the Howard Hughes Medical Institute's byline; I'm not affiliated with them in any way) is a great one. It would probably involve changing the structure of research programs in other fundamental ways, such as by

2) Limiting the number of graduate students. The current relatively low barrier to entry into grad school provides cheap, motivated labor for PIs who are trying to stretch their research dollars to the limit. The outcome today is far more PhD graduates than jobs and, for those who get jobs, far too many of them competing for the research funds that are available. The overall quality of research drops when PIs focus on what is fundable, rather than what is important.

3) Shifting the burden of the bulk of research work from trainees to salaried research associates/assistants/lab techs.

4) Changing the focus of research output from novelty and volume (of papers published) to the quality and significance of the work done. Good research is often slow and careful, and doesn't fit well with the demands of grant funding agencies.

Fewer grad students means more resources available for their training, and better career prospects. Shifting the bulk of the research benchwork to salaried professionals removes the incentive to commit fraud. None of this has to come at the expense of research quality; as an example, the NIH/HHMI already have approaches to vetting research programs for quality, even if the PIs aren't competing for grant funding.


> Limiting the number of graduate students.

Isn't a lot of the graduate school pipeline really about providing a way for people to immigrate to the United States?

The economy around it is pretty complicated and definitely there's way more pressure to supply more, not fewer, spots.


No, the problem is a bit more structural than that.

A single professor trains dozens of grad students over the course of their career. Even if you remove every single foreigner, and every single person who does not want to pursue a career in academia, you still end up with dozens of graduates - per professor.

The number of jobs available for those graduates?

One - that professor's - when he or she retires. And until then, they get to burn the midnight oil, doing grunt work on their projects, in the hopes of out-competing their dozen colleagues for a shot at that one spot.


> The number of jobs available for those graduates? One - that professor's - when he or she retires.

This simply isn't true. Remaining in academia is only one of many career paths for graduate students, and many are financially compensated much better.

Some examples of these include physics and math grad students being recruited by hedge funds, comp sci grad students being recruited by tech companies and geology grad students being recruited by oil, gas and resource companies.

Even within academia the size of the market isn't static. New and emerging universities are hungry for qualified research professors and many come from these programs.


I already excluded the people who are getting a degree because they want to go to industry with it.

Of the 'stay-in-academia' camp, the ratio is hideously stacked against the graduates.

New and emerging universities aren't doubling the academic jobs pool every five years... Which is what it would take to keep up with the graduates being churned out.

(And incidentally, they are also contributing to the oversupply, because their professors will also be churning out even more new graduates.)


Your second and third suggestions will literally kill all research at anything but the most prestigious universities. It is elitism. Graduate students are not always RAs; usually they're not. They often teach, so you're not giving an accurate depiction of the incentives placed on them.

Becoming a graduate student is not easy, the bar is not relatively low, and making it through the program is even harder. It's true that academic positions are not large enough but you're ignoring the private sector.

Research scientists could not possibly be paid enough by universities. These would be graduate students minus the mentorship. A worst of all worlds.


Research publication, funding, and jobs should not be tied to outcomes of research.

They should be tied to how systematic, logical, and well-documented the research is. We need a system wide change from funding bodies, job committees to publishing criteria.


Along the same lines, “what question to ask” or “what idea to try” should never generate credit or fame for individuals. This incentivizes Indiana Jones-style research: results at all costs.

As a researcher you should be desiring to get jobs or rewards based solely on the care of methodology, clarity of communication and ease of reproducibility.

If you do those things well and the papers turn up negative results, that’s good archived knowledge for society. Turning up positive results should be viewed as an emergent property of a wide network of labs, agencies, universities and governments, and never a property of darling individuals.


It can be argued that part of being a good researcher is developing an intuition and taste for interesting and promising research questions. Of course, the counterpart to this is that you see hype and fads developing very easily. But overall it is very easy to come up with experiments that any experienced researcher would have told you beforehand had no chance of producing interesting / impactful / positive results. The system must discriminate in favor of researchers who are able to come up with interesting (positive) results with greater probability, since each experiment represents an expense of finite resources. If your only contribution as a researcher is the ability to run conscientious and well-executed experiments, then sadly your place is much more as an assistant than as a PI.

I've never seen it argued in serious research circles that grants should be given this way, but I have found the non-research community to be overly obsessed with negative results not being published or rewarded. It is simply a consequence of the set of possible experiments being infinite, the same way you can come up with an infinite number of startup ideas, but not all of them are equally good even before implementation. Should we reward startup founders with failed startups because they tried their best and really did the best they could given the circumstances?


The comparison with startups isn't valid - they seek an explicit profit outcome, where good or bad ideas are judged by success in markets. Research is meant to be a public good, one that furthers the repository of knowledge and culture of society at large.

> “ It can be argued that part of being a good researcher is developing an intuition and taste for interesting and promising research questions.”

I don’t think this can be argued actually. This is mythology, usually applied to creditmongers who run labs and accumulate accolades that are actually due to a wide array of students and post-docs who are made to get reduced credit as a type of dues paying laced with rampant discrimination and sexism.

I’d say mythologizing the idea of a crack sleuth with a special knack for research intuition is extremely harmful on all fronts: it deprives society of value because it's false, and it deprives the wide network of lower-level staff who are actually responsible for progress of their due credit.


Estimating from my own reading, 1% of research is fraud and 80% is worthless for other reasons. Numbers vary between fields.

If that's the case, what's the argument for why we should spend time doing something about the 1%? Solving 100% of the 1% wouldn't change the overall situation much.

Possible arguments include:

- Fixing the 80% is hard, but fixing the 1% is satisfying (to the aggressively conventional-minded, at least.)

- The 1% is wrong in a more harmful way than the 80%. Perhaps falsifying data is worse than hand-waving conclusions.

So if the maximum upside is 1% of wrong research removed, and the downside is quenching some fraction of the good 19%, it's probably better to leave it alone.


The author is arguing for an expansive definition of fraud that includes things like p-hacking. Probably a lot more than 1% of research is p-hacked; that general category of problem might even account for a majority of your 80%.


I'm guessing throwing out "bad data" happens in a lot more than 1% of cases as well.
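(A minimal sketch of how that biases results, assuming the "bad data" being thrown out is simply whatever disagrees with the hoped-for effect:)

    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.normal(loc=0.0, scale=1.0, size=100)  # true effect is zero
    # "Clean" the data by dropping the lowest 20% as "outliers"/"bad runs"
    kept = data[data > np.percentile(data, 20)]
    print(data.mean())  # near 0
    print(kept.mean())  # biased upward (~+0.35 in expectation)

The full sample averages out to nothing; the selectively trimmed one shows a "real" effect.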


I think the disproportionate harm argument is a big deal. A single high-profile case of fraud undermines trust in the entire field, and that’s critical to maintain.


I don't think that's true. There have been frauds in materials science, but people still trust it. The fields that are untrusted are the ones that are so hard that it's plausible that the conventional wisdom might be wrong even if everyone is honest.


It's hard to know that a worthless research paper will always be worthless.


Also, who decides what is worthless?


>1% of research is fraud

You are way, waaaaaay underestimating the extent of the problem.


I think you didn't understand the grandparent poster's point. They're arguing, basically, that only 1% of research is deliberate fraud, but 80% is simply useless garbage. The reasons for the garbage are varied, but any way you slice it, it's worthless. So why worry so much about the maliciously useless when there's so much incompetent uselessness?

That matches my experience back when I was an active researcher. Intentional fraud is rare (and highly damaging). Garbage research and garbage papers are everywhere. And even most of the non-garbage research is useless! The core differentiator there is that we just don't know which things will be useful without hindsight. But we can identify plenty of things that were not helpful to anyone and we knew it... yet we still funded those programs.

I think we're mostly in agreement that the size of the problem is vast, even if we might disagree on labelling.


Perhaps I was in an extremely corrupt environment (which eventually made me leave), but I would put the "intentional fraud" rate as high as 30% or so. I pointed out one of such instances once and that was the beginning of the end for me in that place.

More: My field was biology. No one double-checks anyone or anything. A measurement comes back and you can pretty much ignore it; in the end you just write whatever you need for your thesis to hold (I mean, not me, but that's pretty much everybody's dirty secret). In the rare case that someone actually wants to check it out (1 in 1,000), you can just say that the "original data was lost". Sometimes the only thing you can trace back is some sort of written journal, in which anybody could write anything, and that doesn't make it true/false.

Not long ago, there was a group of scientists that went on to try to replicate some landmark results on cancer research (I'll update with a good reference if I find it) and found out that only a very low number of them matched their observations. Some of those were not even close ... like, waaaay outside a very generous experimental margin of error.


Ouch. Things certainly do vary from field to field and I was in one of the better ones (particle physics), so I believe you.

Given how hard an environment it is even in the honest research groups, that must have been awful.


> I pointed out one of such instances once and that was the beginning of the end for me in that place.

How does that occur? Did they invent some reason you couldn't remain? Were you intending to remain anonymous when you reported it? Lastly, were there consequences for the fraudsters?


Power games.

In my specific scenario, she said I was underperforming and didn't want me anymore as her student; there's not much one can do after that.

Of course I tried to defend my stance, even provided solid evidence of her wrongdoings and overall attitude towards me and other students. In the end it was "just better if you just leave".

>Were there consequences for the fraudsters?

Zero.


The root of the problem is that we allow people with high esteemed credentials to run society. They jumped through a hoop.

"You got straight As at 14 - 18 years old and got into an ivy league school as a result? Here run this venture fund."

"You got a PhD in Economics with a good publication by P-hacking your secret data? Here take a run at the FED with power over the US Economy. Your Phd shows that you are the man for the job."

"You got a PhD in ML by making some incremental improvement on some already existing model and then doing massive hyper parameter tuning? Here, become director of research at this big corporation."

Research will never be fully productive in this system, there are too many people who have too much to gain from gaming the publication system.


All three of those examples are missing a step of like 5-20 years in the middle. Nobody is running a venture fund as a 22 year old ivy grad (barring someone who's taking over for their family member, but in that case it has nothing to do with being an ivy grad), nobody is running the FED right out of a PhD, nobody is a director at a big corporation right out of a PhD.

Presumably once those people are in such high-power positions, they also have a track record of real accomplishments behind them; it's certainly possible they've lied and cheated their entire career but it's definitely less likely they'll make it that far that way.


Part of the point is that the gap in the middle often doesn't matter. I've hung around the Stanford crowd, and as I enter my thirties I am awestruck by the opportunities they have regardless of what they have done since graduation. The credential is almost all that matters, as long as they don't mess up massively.

A common path is:

Graduate Stanford => Work at Mckinsey => Get hired into VC.

Graduate Stanford => Raise a $15 million Series A with your buddy => Doesn't matter what happens, you will end up rich.

It's not a cynical opinion; I have seen it myself, and I am doing very well for myself. I worked under an Economics PhD who was number one in his class (top-5 PhD program) and graduated top of his undergrad class at UPenn. His incompetence relative to his credentials shattered my respect for the way that we allocate positions of power.


>The credential is almost all that matters, as long as they don't mess up massively.

Maybe the west coast is different but everyone I know (including a fair number of people who went to Harvard) who's in any position of power didn't have "tool around doing whatever and don't F up" as their career summary until that point. They had success upon success. While some of them didn't necessarily choose big grand things to spend that time doing, they were all successful at it. Nobody just went with the flow.

The "not bad but not great either" people who were just lucky enough to have opportunity early on but don't have the skill to keep turning the opportunity into success (or just want to chill and raise a family) tend to seem shoehorned onto career paths that dead end somewhere in middle management.


I'm familiar with this path as well, and have myself also probably benefited significantly from a slightly lower tier version of it.

I think the issue is more that we have a lot of positive feedback cycles/signal boosting that occur based on early career opportunities. I don't think research fraud is necessarily part of it though, because once you're accepted into a PhD program at a top school you're probably going to graduate anyway; same with undergrad. But the signal boosting is very real, and I find it especially problematic given how much luck there is in things like getting into a certain college.


Of every 2,000 graduating seniors at Stanford every year, far less than 1% end up in top consulting companies. Even fewer raise a Series A 2-3 years out of school. I am personally not so shocked that these individuals end up succeeding at other social games. They have a track record of doing so.


What about GSB? I think 20% of their students hail from MBB, same as with HBS.

I'd imagine that these to-be MBA-holders will seek positions at less grind-y places. Would not surprise me if a lot of them went to join VC firms, or product manager jobs at larger firms, before VC.


I think most people who raise series A and then fail don't get rich. I've certainly known people who raised more than that, couldn't make the business work and then wound up personally bankrupt.


Possibly, but when you raise it at 24 or 25 years old it looks extremely impressive and sets you up for upper management at other companies.


...or they "got operating experience" and end up a partner at a VC firm.


That might have more to do with the subject than the individual.


That some aspects of these self-fulfilling nepotistic bureaucracies are meritocratic raises the question of whether those meritocratic elements exist only to justify the nepotism, and indirectly, the gatekeeping meritocracy.


> it's certainly possible they've lied and cheated their entire career but it's definitely less likely they'll make it that far that way.

The number of people who make "correct, active decisions" is vanishingly small.

It's less they've cheated than "Did you really make correct decisions or did your coin just come up heads 8 times in a row?" Other people may have been as smart or smarter, but if the coin flip went tails, they get politically hammered.

Success has a large part of "survivor bias" to it.


See sibling comment. I agree there is a lot of survivorship bias, but the reality is more like: the first coin flip has a 25% chance, and all the subsequent ones are closer to 80-90%, because there are a lot of positive feedback mechanisms you can take advantage of to maximize early career advantages.


https://en.wikipedia.org/wiki/Peter_R._Orszag (NOTE: NOT suggesting he cheated, just noting that lots of people have fast-tracks pre-planned for them w/o the hassle of working their way up.)

Not quite running the FED, but look at the years, pretty close.

Orszag earned an A.B. summa cum laude in economics from Princeton University in 1991 after completing an 80-page long senior thesis titled "Congressional Oversight of the Federal Reserve: Empirical and Theoretical Perspectives."[11] He then received a M.Sc. (1992) and a Ph.D. (1997) in economics from the London School of Economics.

He served as Special Assistant to the President for Economic Policy (1997–1998), and as Senior Economist and Senior Adviser on the Council of Economic Advisers (1995–1996) during the Clinton administration. Director of the OMB by 2008.


Assistants to the president are often these fresh-out-of-school types.

Obama's speechwriters were in their early 20s as well, for instance.

The Fed is way more stringent -- a fresh PhD would be an analyst or research associate, nothing more.

State departments and organizations have a lot more hierarchy for better or worse.


He was "Senior Advisor" before even graduating. He was a WH Director within 10yrs. Three years later, he was Vice Chairman of a global bank (Citigroup).


Likely they will also commit fraud during the 5-20 years in the middle. These corporations are also highly susceptible to fraud.


This is absolutely not the reality of a PhD. On what planet do people with those backgrounds get those positions out of their PhDs? What's the basis for this entire comment?


The basis is spending 15 years in NYC high finance and having multiple close groups of friends who graduated from Stanford, work in tech, raised VC, etc.


You have 15 years of finance experience as you “enter your thirties”?


You got me, poor wording. My father worked at a very high level in influential investment funds; I grew up around the industry and understand how it works. I will delete this later because it's not something I want to advertise.


It sounds like you have a background and exposure to people who might have many enabling success factors. Shiny credentials might be one but social networks are another. Collectively, they might be contributing to the success you suggest is not deserved, but I’m not sure that’s an obvious argument. People of these backgrounds could probably game any gating ritual.

Perhaps we should focus on drawing more actual talent rather than excluding the phonies (even at the highest levels, status and influence isn’t zero sum in my experience).


I think it is correct that on the surface it is credentials. I also think it is correct that the underlying property is social networks and a particular culture / mindset.

Ultimately, I think the "phonies" need to be addressed directly because I believe they are effectively a cabal.

I also think it requires the broader culture to take active steps to help make this happen: work harder to think about what bad behavior is and take action to avoid / penalize it.

FWIW, I have traveled in the finance / startup circles and kept looking for "better places" but have come to the conclusion that they are few and far between and the issue is the business culture and the broader culture that celebrates it.


Isn't there a time limit on deleting / editing comments?


And the cherry on top: once you are part of a research unit at a big corporation, your annual bonus depends on your number of publications. Which is why the pursuit of incremental but well-marketed results never ends, and why papers with 6+ authors, among whom only 1-2 substantially contributed, are the new normal.


Thanks for pointing out the problem with credentials.

Name a better indicator of competence, then.


Industry kinda already has. Products launched. Teams managed. A candidate's publication record is a conversation starter during interviews, but I feel like years of experience at a company working on launching a successful project is a much more valuable currency.


> Industry kinda already has. Products launched.

This suggestion is so detached from both the problem and the very nature of academia that it's straight-out laughable.

I mean, what do product launches have to do with building knowledge of the state of the art, identifying a novel idea, doing the iterative work to refine the idea, and finally documenting it for the public? Product launches at best require you to manage people and expectations. Do you honestly believe that a guy who launched a product is more qualified to drive science forward than a PhD with an outstanding academic track record, just because your PM had a knack for cutting corners, descoping requirements, and passing the buck to underlings? Because that's the bulk of the job of all PMs I ever met, including at FANGs.

Your suggestion is the poster child of the old mantra "if the only tool you have is a hammer you tend to see every problem as a nail".


The private sector does do research, and for RoI /it actually has to work/.

Academia is a cesspit of politics and dishonesty. Forming the most effective cartels, sensationalizing your results, and being well networked with other academics is not aligned with getting results on actual problems.


> The private sector does do research, and for RoI /it actually has to work/.

The private sector does research with researchers, not product managers.

The process is exactly the same. It's not a public vs private thing. It's a research vs production thing. Research is open-ended and iterative and exploratory. Product design is close-ended, focused, and with hard requirements. Research has zero to do with product management, and you don't change the nature of the problem by pretending that a scientific discovery is a product expecting to be launched.

> Academia is a cesspit of politics and dishonesty.

Oh, awesome. Have you ever done any corporate work? Because if you believe that academia suffers from this problem but corporations don't, then I have a few bridges I'd like to sell you.

At least in academia you do need to have your publications to back you up. In corporate environments all you have is the cesspool part.


Many companies now hire PhDs to do production work, and it ends up being a complete disaster. There is not much need for pure researchers in most of industry.

And in my experience teams with these academic data "scientists" are far worse cesspools than normal engineering teams who deliver actual products and value.

Also, what are publications supposed to show? They are often a negative indicator of actual capability. Ever interviewed a data scientist who looks good on paper with tons of publications who can't even write a for loop? I have.


I agree that business isn't really the right model for academia, but the problem imho is that what an "outstanding academic track record" is has become so ambiguous that it's practically meaningless -- and I say this as a former tenured prof at an R1 institution.

I could write a book about this stuff. The stories I could tell about what's behind those "outstanding academic track records"...

The problem, if anything, is trying to apply a business model to academia, equating research quality with federal grant dollars, taking away real intellectual freedom protections, and then ignoring all the Ponzi scheming and exploitation that occurs. Everyone has their head in the sand, knows academia (at least biomedical research) is full of BS, and just goes on pretending like it's not, because no one knows of a good alternative, or has the courage or power to change things.

What's funny [sad?] to me is that your description of PMs sounds exactly like the most credentialed, accomplished researchers I know on paper.

Some of the examples in the linked piece are interesting to me. Ghostwriting reviews, for example, is actually seen as a good practice in a lot of circles because it gives grad students experience with the review process. Those guest authorships? There's a very grey area between those and collaborative authorships. It's not the grunt work, it's the idea, right? Or is it that ideas are a dime a dozen, and actually doing the work is what matters? I can't tell which it is anymore -- it seems to depend on what benefits those in power.

Someone else posted that roughly 1% of research is fraud and 80% is bad. I think the percentage of fraud is probably higher, the percentage of bad research lower, and the line between the two much fuzzier than you'd think at first. The really difficult thing is that tiny incremental contributions are how things actually work; no one wants to admit this, though. Bad research is actively incentivized, and there are credit bubbles everywhere.

The worst problem is that this credentialing bubble is everywhere, with everything, as another poster noted. The problem isn't the credentialing per se, it's how detached it is from reality, from the real demands of the tasks. Having a credential doesn't mean that the person is competent at all the tasks it nominally encompasses; conversely, those tasks don't necessarily require the credential that's often demanded.


[flagged]


Ad-hominem aside, do you have anything to add to the discussion? I have been in academia for quite a few years, and I've been in industry for longer than that. I made the point I did because I know first-hand what researchers do and what engineers and product managers do. What do you have to add to the discussion?


You're just being personally abusive.


For getting a job, maybe…industry isn't everything.


I think his point is that practical experience and a track record of proven, successful outcomes are more useful for predicting future outcomes.


A good PhD is launching a successful project.


" Products launched. Teams managed."

This definitely has zero to do with getting quality research published, and even in startup land those things are nice to have but still don't prove anything, unless the person was the directing agent of those initiatives.


Those are even easier to hack. Here's the result of measuring by projects launched: https://killedbygoogle.com/


These metrics can just as easily be gamed; see how all the glory at Google goes towards launching exciting projects that then immediately get put into stasis and canceled.


For most, credentialism works fine in academia.

The problem is that you're incentivized to cheat, because it's either that or your career.

Academia suffers from incredible power asymmetry and an obsession with prestige. It's a shit culture that rots from the head down.

Of course, the vast majority of research groups are not like that - but it seems to be a problem which has always been there, but just been swept under the rug.

Again, it's difficult to find informants. Groups are small, and most advisors have very few people under their wing, so it is probably very easy to identify and blackball whistleblowers. And blowing the whistle gets more difficult the more you're invested in your work/degree.


Allow me to offer some qualities other than competency that are necessary for responsible research.

1. Purpose: what is it that drives one?

2. Integrity: when does one compromise it?

3. Awareness: is one able to disengage when one's ego (fueled by credentials) is activated?


Couldn't agree more. And it's getting worse. In certain fields people are almost untouchable. I've lost count of the discussions I've had about doctors and how absolutely incompetent some of them are; the usual reaction is, 'No, you don't say! But the selection process ensures only the smartest get selected, so what are you on about?'. The second group is PhDs, and in general all sorts of professors elevated to almost god-like status. You almost get a free pass to spread whatever shit you want if the credentials are right.

The corporate world is exactly the same. Since my last promotion, I now have a nice title. People now listen to what I have to say. They even take me seriously. I can now go to some guy who's got 20 times more experience and start selling my consulting services (I'm not a consultant).


>I've lost count of the discussions I've had about doctors and how absolutely incompetent some of them are; the usual reaction is, 'No, you don't say! But the selection process ensures only the smartest get selected, so what are you on about?'. The second group is PhDs, and in general all sorts of professors elevated to almost god-like status. You almost get a free pass to spread whatever shit you want if the credentials are right.

It's like the old joke: what do you call the worst-performing graduate from medical school (or a PhD program), the one who was at the bottom of their class? You call them "Doctor"!


I'll trade my credential for a few million any day of the week.

I never have any takers; I wonder why.


Credentials are non-transferable, and non-voidable unless you are later discovered to have cheated. I don't get what point you're trying to make here.


> The root of the problem is that we allow people with high esteemed credentials to run society.

If people with highly esteemed credentials ran society, surely somebody would have offered me a bribe of a few million dollars by now, in order to make use of my 'society running' abilities.


You forgot a step:

"Your parents are rich, so you have a chance."


Publications really only look impressive to outsiders and grad students.


Surely you mean the number of publications, or some other superficial metric (e.g. being the third "supervising" author on a paper with a dozen authors).

If I read a paper and have a great appreciation for the ideas, and have a sense that an author contributed significantly to the aspects I appreciate (either from explicit descriptions of the authors' contributions in the publication, or by speaking to them), why would that not be impressive to me?


I work with a group that got a pretty ground-breaking tech paper published in a top medical journal. They estimated it generated $100M in investment for the group over 2-3 years. I think the original investment for the work that produced the paper was about $1M. FWIW.


I beg to differ. Publications are impressive because they are, at the very least, an indicator of the volume of work a research group does in a domain. Writing a paper requires time, effort and, more importantly, focused work. As anyone who works or has ever worked in academia can tell you, finding time to do anything of value, whether it's exploring a new idea or continuing work on any of the ideas you have floating around, is the biggest challenge.


So, some years ago, Temple Grandin wrote a set of standards, and McDonald's adopted it; they buy so much beef that it became the de facto new standard for the beef industry. And it's a set of standards that helps beef producers succeed, rather than a "gotcha" trying to find who is guilty.

And that's the way you make the world a better place. Not by looking for new and creative ways to nail "bad guys" to the wall after you started from an assumption of guilt.

I don't like this article. I don't like it at all. My feeling is that it was written as an emotional response to the pandemic and it is getting traction on HN for the exact same reason.

People are stressed out and they are looking for a villain to go after. It won't fix the real problem -- the pandemic -- but that's how people tend to behave in a crisis.

And it's a slippery slope towards a more draconian world. It doesn't make things better.


The McDonald's story reminds me of the Brussels effect [0], where legislation in the EU extends (not by law but in its effect) to other parts of the world because it is easier or cheaper to comply with it for all customers than to treat EU customers and others differently.

[0] https://en.wikipedia.org/wiki/Brussels_effect


So that's why I have to accept cookies all the time.


No, this is either because they have no idea what they are doing, or because they want to punish users for the EU not letting them have their way with user data, while pretending this behaviour is forced on them by consumer-unfriendly EU regulations.

It's perfectly fine to set cookies that are necessary for providing the service without having to ask the user.

They only need to get one's permission if they want to do other things like selling data to advertisers.


A friend of mine was doing a chemistry PhD when he discovered his supervisor was falsifying data. If he had blown the whistle it would have ended his career, but if he had played along, his dissertation would have been based on false data. Both options were bad, so he just quit.

https://www.statnews.com/2016/11/25/postdocs-grad-students-f...


I was doing a robotics engineering PhD at a highly ranked university a few years ago. I contacted the dean of engineering and the office of legal affairs and informed them that my advisor had submitted falsified data to journals, falsified financial statements to his sponsors and the university, performed experiments risking serious bodily harm on human test subjects without IRB approval, committed wage theft against multiple students, and slandered several of his research assistants to keep them from getting funding and work outside of his control. They swept it under the rug and gave him tenure. This was one of the universities that was in the news for faculty members and administrators accepting bribes from celebrities a year or two ago. They added another billion to their endowment and I left with my MS.


That's some guts to do the right thing. Respect.


Huh? But he didn't ... he just fled.

When I started my PhD I became entangled in a similar situation. I reported it, and it got me kicked out of that program. I had to start over, but I would do it again if the situation called for it. No amount of money or "awards" compensates for a dishonorable life.

"The only thing necessary for the triumph of evil is for good men to do nothing"


I was not faced with such a shitty situation, and I probably would have done the same as well, but the right thing would have been to report it.


If you report it then the grant money is gone and everybody in the lab loses their job and all the PhD students are derailed.


That’s the fault of the person falsifying data, not the person reporting the fraud.


Yeah but it still happens, and to people you care about who didn't deserve it.


That's true. Several people will probably lose their jobs.

But that is the kind of thinking that condemns whistleblowing for the wrong reasons. To put it differently: people would rather keep pretending, if that keeps their jobs, than do the right thing. And that makes everyone a fraudster.


Ideally, yes! That would be the best outcome.

I don't know why people always see dens of corruption and think that they ought to be protected, because of the jobs of the folks doing the corrupt work. Jobs are not the point of life, and we should accept that some jobs are bad jobs which should be eliminated.


In most fields this is unimportant, but bad research can have significant public policy impact.

Sure, it sucks for the students, but you have to introduce accountability in academia for it to stop being worthless.


I'd argue that a lot of what the authors describe isn't actually "fraud"; it's more exploitation of an intentionally broken system.

I've definitely had authors on my papers who didn't do work. I've definitely written papers for people who didn't do work. I've definitely done peer reviews on behalf of PIs. Why do people do this? Because the regulators allow it and they want the system that way. Why should who wrote the paper have any impact on review? Why should it matter who the journal editors are? Why should it matter where the paper is from? Etc...


Because we use heuristics instead of actual analysis.

How many people citing a paper or reading it actually have time to think that deeply about it?

The alternative to a world of trust and heuristics is a world where we are all bogged down trying to make decisions.

This is heavily demonstrated in recruiting. Resume reading is about 7 seconds a person. How long would it take if they spent a minute for each?


Interesting notes from the paper mentioned in the article: https://journals.sagepub.com/doi/pdf/10.1177/174701611989840...

- "only 39 scientists from 7 countries have been subject to criminal sanctions between 1979 and 2015 (Oransky and Abritis, 2017)" That seems...very low.

- "The Retraction Watch database—the largest of its kind—currently includes more than 18,500 retracted articles (Retraction Watch database, 2019). A recent analysis of 10,500 retracted papers up to 2016 showed that 0.04% of papers are retracted." This is once again a lower-bound; presumably if you account for additional authors and p-hacking the numbers go up a lot.

Pushing for replication and improved methodology can help, but some of these issues seem to be related to scale. There are many more people outputting papers than there are people willing to vet them (outside of peer review). Furthermore, when you have many people researching hot fields, you should expect false positives and overestimates to dominate published results, even when everyone is trying to practice good statistical hygiene. (https://journals.plos.org/plosmedicine/article?id=10.1371/jo...)
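To put a rough number on that last point, here is a minimal Python sketch of the standard positive-predictive-value argument (the prior, power, and alpha figures below are illustrative assumptions, not numbers from the linked paper):

    # Share of true findings among "significant" results, Ioannidis-style.
    # All three inputs are assumed, illustrative values.
    prior = 0.05  # fraction of tested hypotheses in a hot field that are true
    power = 0.80  # P(significant result | real effect)
    alpha = 0.05  # P(significant result | no effect), the false-positive rate

    ppv = (power * prior) / (power * prior + alpha * (1 - prior))
    print(f"P(real effect | significant) = {ppv:.2f}")  # ~0.46

With those inputs, fewer than half of the "significant" results reflect real effects even though every individual study is honest, which is exactly the scale problem described above.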


Basic set of checks-and-balances:

* Preregistration and adoption of open science practices

* Public access to research results, methods, and data, with some exceptions (such as PII)

* Federally-funded universities can't use NDAs or non-disparagement agreements

* Federally-funded universities must respond to records requests under terms similar to FOIA (note that FOIA has requester-pays costs)

* Federally-funded universities must adopt transparent governance

* Salary caps at federally-funded universities and affiliated organizations

* Conflict-of-interest laws with hard enforcement

* Federally-funded universities must publicly publish research misconduct and alleged research misconduct. The latter is tricky, since you don't want to smear the researcher without proof, but you also don't want to trust results.

This really needs reform.


Salary caps? So the best people go into the private sector where they can actually get paid?


Agree, I don't think salary caps have much to do with the poor alignment of incentives under discussion here.


Don't think $90k. Think $300k. There are plenty of top people glad to be academics for $150-$200k, and beyond that, you don't get an increase in quality. On the other hand, you avoid a lot of problems with misaligned incentives at the top.


> * Public access to research results, methods, and data, with some exceptions (such as PII)

You mentioned PII, so I'm assuming some familiarity with the health field. I'm curious about your thoughts on the position that one should not be required to publicize one's data immediately, because there needs to be an expectation that a researcher can translate the capital (both time and money) they expend to acquire quality data into academic and institutional capital (in the form of research output, i.e. papers). The fear is that there might be insufficient motivation to conduct large data-collection-oriented studies if another researcher can beat the data collector to the punch in publishing certain findings.


My opinion is that it'd be better to handle that from the other side: incentive structures. If I generate a useful data set, I should get the equivalent of citations/publication/career credits for any work from that dataset. Enabling science is just as important as new results.

But I don't care much, so long as it gets published within a sensible timeframe.


At the very least it seems like it would be fraud not to include the original collector of the data as a co-author on the analysis paper.


I know of at least one professor for whom this happened: one person generated the data, another person cleaned it, and the professor got her faculty position by taking the cleaned data, running a regression on it, and publishing the first result. The original research team could have done that work in 15 minutes, but wanted to hold off on publication.

It got a ton of citations and press for her.

The research was at MIT. I won't mention where she's a professor, in the interest of privacy. I know of several similar cases at MIT, too.

But if it was fraud, what would one do about it? Screaming about this sort of thing kills everyone's careers, and embarrasses the institution the research was done at. It's no good for anyone involved. People move on. The whole system incentivizes this sort of fraud, and faculty positions are hypercompetitive, so people follow those incentive structures to be successful.


> Federally-funded universities can't use NDAs or non-disparage agreements

Many companies are only willing to enter into research agreements with a lab provided that the lab is willing to sign an NDA. This would prevent companies like Google, NVIDIA, Apple, and Facebook from working with research labs. That is especially short-sighted since a student working at a federally funded university will have to publish their work.

> Federally-funded universities must respond to records requests under terms similar to FOIA (note that FOIA has requestor pay costs)

This is already true. I have known many faculty whose emails were FOIA-able and as a result, they preferred in-person communication for sensitive topics.


> Many companies are only willing to enter into research agreements with a lab providing that lab is willing to sign an NDA. This would prevent companies like Google, NVIDIA, Apple, and Facebook from working with research labs. This is especially short-sighted since a student who is working at a federally funded university will have to publish their work.

There are different types of NDAs. I'm more concerned about the ones which are used to silence whistleblowers than the types used to protect pre-release product information, PII, or partner trade secrets. That's not a difficult split to make. Tools include: (1) Time bounds on NDAs. (2) Domain bounds on NDAs. (3) Publishing the agreements themselves.

This goes for a broader set of abuses too -- not just research fraud, but also sexual abuse, gross negligence, corruption, etc. A lot of this stuff goes on at elite schools, and lots of people are bullied or bribed into signing their rights to talk about this stuff away.

> This is already true. I have known many faculty whose emails were FOIA-able and as a result, they preferred in-person communication for sensitive topics.

This is not true in general, but it is on some specific government projects. Most normal grants (NSF, etc.), it's definitely not true on.

As a footnote, odds are if faculty weren't willing to conduct business by email, something improper was going on. That thought process is common in a corporate setting, but not in an academic setting.


> The latter is tricky, since you don't want to smear the researcher without proof, but you also don't want to trust results.

IMO it's the build-up of uninvestigated allegations that makes them so damning. If we actually had justice for the upper classes, where small allegations were frequent and investigations were de rigueur, we could actually have a functioning culture of "innocence until proven guilty".

Relatedly, we should probably all have minor criminal records and it should be no big deal.


Let's add:

* A portion of faculty which must be tenured.


I've read that before 2003, the whole of humanity had published as many scientific papers as were published from 2003 to 2016.

So what happened around 2000? Who has turned the scientific mission into a blind competition for superficial metrics? So many people in science I meet (apart from the few who benefited from this system, and therefore were selected by it) are frustrated by publishing for the sake of publishing (not science) and the bad incentives this system creates.

Who thought that these superficial metrics would improve anything about science, and why?


With any exponential growth curve, you can point to some semi-recent point on the curve and say that 50% of the total lies to the right of that point. If the population is growing exponentially (it is), and the percentage of the population in academia is constant or growing (that I'm not sure about), then you could reasonably expect the quantity of research to be growing exponentially as well. Maybe it's just a curve with a doubling period of 13 years.
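For what it's worth, the arithmetic checks out in a quick sketch (assuming a constant 13-year doubling time; the base year and absolute scale are arbitrary choices for the example):

    # Toy model: papers per year growing with a 13-year doubling period.
    growth = 2 ** (1 / 13)  # annual multiplier

    def papers_in(year, base_year=1900):
        # Papers published in a given year, in arbitrary units.
        return growth ** (year - base_year)

    before_2003 = sum(papers_in(y) for y in range(1900, 2003))
    from_2003_on = sum(papers_in(y) for y in range(2003, 2016))  # 13 years
    print(round(before_2003), round(from_2003_on))  # roughly equal totals

So the 2003-2016 statistic doesn't require anything special to have happened around 2000; steady exponential growth alone produces it.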


I think it's mostly a numbers game: more PhDs, roughly the same number of research positions. This means most scientists don't know most other scientists in their field, so the metrics people use to select become more important. That, combined with the increased competition, means a bigger rat race, more pressure to publish, etc.


I imagine the internet has done a lot to increase the speed of research, communication, and writing. I imagine it also speeds up peer review and has expanded the number of available places to publish your work. I agree though, it would be nice to spend more time sciencing and less time writing about it.


Academia is not unique in this regard. Superficial metrics have been introduced into many fields with similar results. Warnings about the danger of metrics have been ignored.

Campbell's law: "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor." (1979)


0) The internet has made citing (and the expectation of knowing everything) so much easier. 1) China entered the game in force. 2) Everyone else has to step up. 3) Companies entered the market. 4) In some areas, an arXiv preprint (which is good) full of errors/hand-waving gets some 100 citations, while peer-reviewed work does not.


My guess is the education bubble. Loans are subsidized, universities have bigger budgets and want more studies done.


I think one of the big things that you can do is split up the data gathering and data analysis.

Kind of like how we don't trust companies to audit themselves; instead we have an outside firm.

In this model, a researcher would create a hypothesis and collect the data. Their team would write the background and methods sections of the paper.

Then the entirety of raw data would be sent off to a third party for data analysis and they would write the results part of the paper.

The original team would then write the discussion part discussing the implications of the study.

All papers would be required to be made public.

The idea is that there would be specialized firms that do the analysis on the raw data for everyone. These would be carefully audited and certified by the government, and they would have no incentive to play statistical games. If they get caught cheating, they have to pay for the analysis on all their papers to be redone by a competitor, and any paper whose analysis turns out to contain errors is automatically retracted. While this re-analysis is going on, all affected papers would be quarantined with a note stating that the analysis is being redone. This incentivizes the analysis firm to be ethical, and incentivizes researchers to pick ethical analysis firms.

Separating data collection from data analysis would help align incentives better.


That's an interesting idea that could stop very specific types of fraud, certainly in the life sciences. But it's not feasible for all kinds of research, and in fact could hinder lots of research.

> All papers would be required to be made public.

This is more universally feasible. Publicizing the data and analysis tools (scripts, software) falls into the same category, and would go a long way to help without the need for such strong separation.


You would just get people massaging or generating the data to fit a conclusion before sending it out for statistical analysis. If people want to cheat, they can find a way. The only way to really do what you are getting at is to construct core labs or CROs to run all experiments on behalf of the investigators. This is not infeasible in many cases (and is already done in narrow ways), but it requires hiring staff scientists to run every experiment rather than grad students or postdocs, and costs/complexity will explode.

The real way to defend against a lot of fraud is to force people to submit actually detailed methods sections, so experiments are legitimately reproducible (they largely aren't now). This would catch a lot more fraud quickly, although even this won't fully work, as some experiments are simply too costly to reproduce for validation purposes (e.g. animal studies).


This.

And also actually fund researchers who do reproducibility work. Maybe even fund specialized teams that do only reproducibility work.


Research fraud gets a lot of attention because it is so black and white. But it is a symptom of larger problems. One issue is that the pace of progress is slowing, and as a result incremental gains are more prominent. This is fertile ground for fraudsters, as they can produce results that are plausible enough while seeming to be an important contribution to the field. All fraud fits into this category: nobody makes up a grand unified theory of everything which they know is bogus. That would be too much work, for a start.

The other issue is the huge expansion in university size. Most of the fraud I've seen or heard about happens in university research departments, which shows you the importance of their incentive structure. One can make things up and not only succeed, but do better than one's competitors in this research setting, AND get a tenured position with life-long security. All competitive fields where achievement receives external and highly persistent rewards suffer from this problem, whether it be sport and performance-enhancing drugs, Ivy League university admissions, or even venture capital funding (Theranos).

The natural response is to ask for more regulation and structural change in how research is conducted, e.g. pre-registration, different statistical standards, etc. But this has the major disadvantage of making life harder for the honest people. It also requires the creation of some parallel workforce to handle all the checking. Research is already so difficult. Paradoxical effects, where such measures increase fraud, are definitely possible.

There will never be zero fraud. The aim should be to change how research is done to make the experience more humane, train and mentor young scientists carefully and avoid perverse incentives. As far as I can tell, nobody has any idea how to do this. Instead they want to create investigatory bodies which will siphon off money that could be used for research, and then ruin lives pursuing some key performance indicator like N successful fraud cases per quarter. This experiment was already run in the USA with the Office of Research Integrity, and it failed. Malcolm Gladwell, who I am not a fan of in general, has a good podcast about it [1].

[1] http://revisionisthistory.com/episodes/28-the-imaginary-crim...


Bluntly put, peer review is an overwhelming, unpaid, unsatisfying, time-expensive task. The only step forward is to force the release of both data and implementation, at least for the highest-ranked journals, though this possibly opens more unpredictable cans of worms. On one side is the elitist academy model, which no longer works; on the other, the democratisation of research, which so far is just a torrential flow of noise.


You can't really improve this as long as the lead author has final say on how to treat outliers.

In any given study, there are going to be hundreds of special cases in the data that you didn't anticipate, and you have to decide whether to include or exclude them.

Any researcher will subconsciously be more sympathetic to arguments to exclude subjects that go against the principal theory, and less so to subjects that confirm it.

And it's a battle of reasonable arguments, most of the cases aren't bright line fraud or misconduct, they're just humans finding some arguments more compelling and the impossibility of escaping our own biases. (And yes, sometimes it's fraud, but fraud is just the tip of the iceberg if we're talking about genuinely improving the reliability of scientific findings.)
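As a toy illustration of how strong that bias can be, here's a minimal simulation (the 2-SD cutoff and the hypothesized positive effect are assumptions for the example): exclude "outliers" from pure noise, but only on the side that hurts the theory.

    import random
    import statistics

    random.seed(0)

    def study_mean(n=30, biased=True):
        # Pure noise: there is no true effect in the data at all.
        data = [random.gauss(0, 1) for _ in range(n)]
        if biased:
            # Drop "outliers" beyond 2 SD, but only those that cut
            # against a theory predicting a positive effect.
            data = [x for x in data if x > -2]
        return statistics.mean(data)

    fair = statistics.mean([study_mean(biased=False) for _ in range(20000)])
    slanted = statistics.mean([study_mean(biased=True) for _ in range(20000)])
    print(f"symmetric handling:  {fair:+.3f}")     # ~ +0.000
    print(f"one-sided exclusion: {slanted:+.3f}")  # ~ +0.05, an effect from noise

Even this mild, defensible-sounding rule manufactures a consistent positive "effect" out of noise, with no bright-line fraud anywhere in the pipeline.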

Prepublication is helpful, but more and more I'm convinced that the only way to do proper science would be to completely disaggregate study design from study execution.


Similar to research fraud, I want the medical field examined for anti-science fraud.

I caught my physician recommending an expensive and dangerous surgery that could be done by a dentist or surgeon. I asked if there was data; she said yes. There was no data. And the trend was toward using lasers rather than surgery, since it's safer. I confronted her and she said:

"If you ask a physician, they will recommend a physician. If you ask a dentist, they will recommend a dentist."

This physician used factionalism rather than science.

I imagine this has happened on a massive scale.


I think there is outright fraud and subtle fraud.

For instance, most mouse studies, specifically as they relate to aging, drug safety, and cancer research, should be thrown out. This is well covered here:

https://m.youtube.com/watch?v=pRCzZp1J0v0

That's not the only issue, but it's a known issue that everyone is ignoring as it relates to all studies using the most common lab mice.


This Weinstein mouse model thing is blown completely out of proportion, and is only so prominent because his own brother Eric put it on The Portal.

The point is that everyone knows mouse models are imperfect, but they are still useful. Pointing out one of the ways they are imperfect is fine. Pretending this is a Nobel-prize-winning, earth-shattering discovery that is suppressed by the establishment is just annoying. Research about how mouse models are imperfect comes out all the time; e.g. there was a paper about how the relatively sterile lab environment changes the murine microbiome and immune system. Why aren't those researchers on Joe Rogan's podcast?


I generally agree people understand mouse models are imperfect.

The issue here is that you have to understand how they are imperfect for them to have any value at all. In this case, doing drug research on a mouse model is fine, unless the goal of that study is to see damage to organs or DNA. Unfortunately, that's what was not being controlled for or understood: the DNA has more protection from damage in the lab mice used for models than in normal mice or humans.

This implies most of our drug evaluations look better on paper than in reality. A huge deal if true, and there's (long-term) evidence to support this.

The reason it's on The Portal and Joe Rogan is that no one in the scientific community is evaluating it (on the surface). What's more baffling, working in and around this space, is that I can see the effects first-hand (which I won't dive into, for the same reason no one in the scientific community is saying anything about this).


He specifically mentions that comments like yours are not helpful. This is the research fraud problem in science that we are talking about: issues at hand are ignored or twisted so the paper mill can continue.

The concept is that all lab mice in the USA come from one laboratory and have a common attribute that's different from wild mice and mice from European suppliers.

And it's spelt out how, in theory, this different attribute could change medical results.

To come back with "we don't care; without proof it's not an issue" is the mark of a deeply broken system, but it's exactly how current universities work.

Here's the actual segment (18 mins) - https://www.youtube.com/watch?v=ve4q-1D_Ajo

Related (12 mins) - https://www.youtube.com/watch?v=8ygLNOt43So


I've heard everything he said, and I'm well aware of the concept. Again, everyone knows that all the mice come from the same place (because it's written in the papers), and that they are very different from wild mice or mice from elsewhere.

I also didn't say we don't have proof that it is a problem. Again, all models are imperfect, but some are useful.

I am telling you what I see as the significance of Weinstein's findings, as someone who works in the field.

It is nonsensical to me that this particular concept is suddenly so prominent because of Joe Rogan, whereas a multitude of other issues with research and human translation are not talked about at all. To me, what this is really about is Weinstein's perception of the significance of his findings. He, like a lot of scientists, thinks his work is hugely important and that he deserves more credit. Others do not. This happens all the time. The only difference is that he has found a way to disseminate his findings to a non-specialist audience.


I usually do this exercise with my students when they tell me they understood what I just explained: I ask them to explain it back to the whole class.

Could you do that for us here? Explain to us what problem Weinstein detected, what it relates to, and what its consequences are.


You are still saying what Weinstein said you would say.

Weinstein has laid out a case. It makes sense theoretically. It is quantifiable. He makes a good argument that it is massively significant. If confirmed, it is fixable. If confirmed, it has implications for other fields.

This is in pop culture now; anyone researching with mice should have heard of it, so the fact that no one can whip out an article disputing it says a lot.

Science is broken and scientists are not to be trusted. I think this is a good test case for how science deals with such a claim and whether it can actually move forward. It needs a good rebuttal that addresses the issue.


I don’t think you are listening to me. I am not disputing his point. I think it could well be valid and is certainly plausible. The idea is pretty clever. Attempts should be made to understand it more and minimise the effect. Great, add it to the list of problems with mouse models. It certainly isn’t at the top.


Honestly, the video is only 18 mins (9 mins at 2X) long. Just give it a watch.

He says it's the top issue. He explains why it is the top. It is logical and well thought out, with theoretical and practical evidence that it is the top.

I'm happy to consider it's not the top.

But you have to actually say what is at the top.


The link provided is a three-hour video, and the speakers are pretty vague about what exactly they're talking about. By 2:20 one of them is claiming that "we're staring at many scenarios that end in some kind of civil war", which seems extreme if the video really is about mouse studies. Is there really any useful scientific content in there? If so, a timestamp would be appreciated.

Moving on to other sources, are you referring to lab mice having extra long telomeres and thus unusually high capacity for cellular renewal, as in [1]?

[1] https://pubmed.ncbi.nlm.nih.gov/11909679/


They go over multiple topics. Check the top comment for timestamps.

If you want to hear the full story I suggest listening to this podcast episode: https://www.youtube.com/watch?v=JLb5hZLw44s.

It's long, but I found it very interesting.


I've seen lots of "questionable" research, so much that I rarely read papers published by other researchers, because chances are the data is junk. And I'm in academic biomedical research!!


Makes me think of Hans Lehrach. Dunno why

https://www.molgen.mpg.de/hans-lehrach



Starting to think it's time to close down the university system and start a new one...


Sure, but how would you organize the new one?


I'd just hope new alternatives would emerge as the old system gets out of the way.

Then we'll see what develops.

Not a selling argument, I know :)


Quite a coincidence to see this here, as a book by Dr. Stuart Ritchie about this very thing was just published.

https://www.amazon.com/Science-Fictions-Negligence-Undermine...


Certain groups have been serious about the replication crisis for 10-15y, but academic culture at large is simply not cut out to discuss fraud in a 'street epistemology' sort of way, such as the way security researchers might discuss cybercrime.

There's a wild amount of pushback to any amount of meta-criticism. But once you get past that point, many roadblocks remain.

In particular, there's extreme bias for meta-statistical methodologies that infer QRPs (questionable research practices) over other methods of investigation. Often, these methods aren't strictly necessary in context, and they afford the opportunity to turn the metasci discourse into endless bikeshedding about the meta-framework rather than the object of dispute.

Many interested parties will participate/perform in this discourse, but few will sit down and really look at things as simple as the logical structure of the paper's claims, or even simpler problems with its content.

In the case of social psych, for instance, many problems lie with (a) stimuli and (b) unexamined assumptions on the part of the researchers that a certain manipulation holds, so these are important to assess. But extending critique to these things is seen as "reviewer 2" behavior — uncollegial, unfair, sniping, waaah etc.

Since "researcher x produces bad research, but evidence of fraud is only circumstantial" isn't sufficient grounds for doing much of anything, little comes of these efforts. When an investigation does nail a fraudster to the wall, well, so? The papers remain, the poisoned citation tree remains, the culture remains.

More than anything, the pushback in the form of tone-policing is what gets me. "Methodological terrorism" this, "reviewer 2" that. It shows how far from consequences the gatekeepers are. If you're a young person, whole branches of the academy have been pre-bankrupted for you. There's no hope there, short of a research path that manages to avoid citing any prior literature. But don't get tetchy with the grantlords!

Meanwhile, all around the US, real effects of this dogshit excuse for scientific inquiry can be seen every single day. Police departments are adopting new policies around known-bad implicit bias papers. These won't work. We know they won't work. We've known this for years.

What would make it stop? Every time cops kill an unarmed person, academics who still haven't retracted their implicit bias papers get fined?

There's the rub. You can't do much to force the point. Effective, timely measures would be fairly brutal ones, and academics aren't ready to admit this to themselves. In some ways, COVID-19 may end up being one of those measures, though it will disproportionately affect younger researchers.


Equalize the field and radically remove research hierarchies. Liberate research funding from pointless bureaucracies and 4-year grants, and instead pay scientists to work on whatever they like most. Let science be fun again.

I always return to this article from David Hubel (of Hubel&Wiesel): https://www.cell.com/neuron/fulltext/S0896-6273(09)00733-8

I arrived at his lab around noon and found him working alone, recording from an anesthetized macaque monkey. I asked him when he had started the experiment, and he answered “in the morning”, which I finally realized was the morning of the day before. So he had worked, by himself, all the day before, all that night, and that day until noon. What was typical, in that era, was not only the long hours but the fact that the project was done by one person, single handedly. The major papers were either by Mountcastle alone or in partnership with one other person. The leader of the physiology department was Philip Bard, but the idea that Bard should have asked to have his name on any of Vernon's papers surely never occurred to anyone.


> Liberate research funding from pointless bureaucracies and 4-year grants and instead pay scientists to work on whatever they like most.

Bureaucracy exists to ensure that funds are indeed spent on research instead of jet skis and shiny toys, and hierarchies are established to focus work on specialized topics pertaining to the state of the art of a field of research.

All your suggestions do is ensure that fewer funds are spent on research, and that the little work that does get performed cannot be any kind of deep dive or continuous effort on a topic, thus yielding only low-hanging fruit.

And by the way, the minute a researcher complains he needs help with his research is the moment you get a hierarchy.


Bureaucracy cannot guarantee the quality of research beyond the jet-ski bar.

Hierarchies are established to ensure that the PI gets citations from the work of postdocs and postgrads.

Typical grants encourage shallow trend-chasing, or incremental continuation of the research of (often disinterested) senior PIs who simply get grants because they have gotten a lot of grants.

Chasing the state of the art is just another incremental step; that's shallow.

And a research partner is not necessarily a subordinate.


Bureaucracy exists to ensure the university gets its cut. When I was there, the university took 40% off the top for overhead.



