Hacker News | item 23960387

Cracking down on research fraud

331 points | apsec112 | 5 years ago | undark.org

201 comments

[+] gwerbret|5 years ago|reply
These issues of research fraud come up often, but the root of the problem is a bit more subtle.

In North America at least, biomedical research labs operate largely as fiefdoms of the individual principal investigators (PIs). The actual research work falls almost entirely on the backs of grad students and postdoctoral fellows. The grad students need to generate "good" data in order to graduate, the postdocs need the same in order to gain real employment (with only about 10% gaining faculty positions themselves after many years of postdoctoral training). The PIs need such "productivity" from their trainees in order to gain the funding that keeps the labs going. The PIs themselves face success rates in grant applications that are often 10% or lower and, particularly early in their careers, their job security depends almost entirely on their ability to secure grant funding.

These competitive pressures create enormous incentives for otherwise conscientious people, all the way along the hierarchy described above, to fudge their research data. Research fraud is thus a direct outcome of a fundamentally broken approach to the structure of research funding.

There are exceptions to that approach, however. The not-for-profit Howard Hughes Medical Institute [1], and to an extent the intramural research programs of the NIH [2], offer funding for PIs to do what they do best, without the pressure of competing for scant funds. Coincidentally, some of the best science comes out of these sites.

1: https://www.hhmi.org/scientists 2: https://irp.nih.gov/about-us/what-is-the-irp

[+] moralestapia|5 years ago|reply
>labs operate largely as fiefdoms of the individual principal investigators

To add insult to injury, you wouldn't believe the lengths to which some PIs will go in order to attain/preserve their 'power'. Illegal, unethical and pathetic.

The core problem is that PIs are human, and humans are flawed. The solution (if there is one) should take this into account, and somehow try to reduce it systematically, at an institutional level. The problem is that the ones that make the rules are not going to fight against themselves ... I wish I had something more to add, but that's it, that's the state of the field.

[+] canjobear|5 years ago|reply
Another variant of the incentive problem is more insidious. Many academics are driven by ego, and specifically a desire to be influential. One way to be influential is to have great, original, and correct ideas. This can be a good incentive, but coupled with our human ability to deceive ourselves, it can become pathological. I've seen researchers get convinced that their ideas are so beautiful and right-feeling that they just can't be wrong, and they will torture the data and run experiment after experiment until it appears to work. Sometimes this crosses into outright fraud without the researcher even realizing it: after all, the theory is right, so if the data don't match it, the data must be wrong!

This problem seems much harder to fix than research funding, because the question is how to allocate prestige and influence, not just money.

[+] Balgair|5 years ago|reply
Aside: To give readers an idea of the financial pressures, here's an example. One of my former professors told me that lab space costs ~$45/sqft/mo from the university, with a 300 sqft minimum. That's ~$160k/year just for the floor space of the lab itself, not even the office space. Now, electrical, heat, vivarium, elevators, janitors, security, etc. are all rolled into that. But that doesn't include specific equipment, reagents, grad students, or the professor's own salary. Granted, every university is different (just look at patent ownership clauses).

It's not cheap and you must win grants just to pay rent for your own apartment. Especially early on in your career, it's a real struggle.
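A quick back-of-the-envelope check of those figures (the rate and square footage are the commenter's quoted numbers, not official ones):

```python
# Annual floor-space cost at the quoted rate and minimum footprint.
rate_per_sqft_per_month = 45  # dollars, as quoted by the professor
min_lab_sqft = 300            # quoted minimum lab footprint

annual_floor_cost = rate_per_sqft_per_month * min_lab_sqft * 12
print(f"${annual_floor_cost:,}/year")  # $162,000/year, i.e. ~$160k
```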

[+] Gatsky|5 years ago|reply
I have to disagree with this 'good people gone bad' analysis. None of the research fraud I have witnessed fits this model.
[+] etrautmann|5 years ago|reply
HHMI does fund some of the most enormously successful work in biomedical science, but they start with the very best of the best. It's not clear how much of these labs' success is the result of the precise funding structure, although it's obviously helpful to spend less time writing grants (or just to have additional resources).

These labs are up for renewal from HHMI every 5 years, and as such, do face the same pressure to continue publishing high-impact work.

[+] dasudasu|5 years ago|reply
The topic of funding is a recurrent one in these types of discussions. There are always more people wanting to do science than there is funding available. There needs to be a process to allocate resources that takes into account competency for the job. Just granting it at random, or equally, is probably very far from optimal if it means that an exceptional PI cannot even do the things to push his/her research to the next level. Research funding is a bit like a command economy in that there are no market forces that can be drawn upon to just sort it out naturally, so of course it ends up being especially thorny.
[+] raister|5 years ago|reply
> with only about 10% gaining faculty positions themselves after many years of postdoctoral training

There are no (steady) places for everyone, sadly. That's the way things are: institutions demand that postdocs be extra productive while promising a permanent position somewhere (or a letter of recommendation, whatever).

Currently, the postdoc 'experience' goes by multiple names: research associate, research fellow, research engineer, as if it were a 'career' to pursue, with minimal benefits, salary attached to research projects (so it can stop when the project ends), etc.

[+] henriquemaia|5 years ago|reply
I agree with the subtle point behind the fraud crisis: misaligned incentives.

However, that doesn't excuse the individuals doing it. "Because jobs" is a terrible ethical excuse.

Nevertheless, you have an interesting point there.

[+] martincmartin|5 years ago|reply
What do you suggest as an alternative?
[+] tlb|5 years ago|reply
Estimating from my own reading, 1% of research is fraud and 80% is worthless for other reasons. Numbers vary between fields.

If that's the case, what's the argument for why we should spend time doing something about the 1%? Solving 100% of the 1% wouldn't change the overall situation much.

Possible arguments include:

- Fixing the 80% is hard, but fixing the 1% is satisfying (to the aggressively conventional-minded, at least.)

- The 1% is wrong in a more harmful way than the 80%. Perhaps falsifying data is worse than hand-waving conclusions.

So if the maximum upside is 1% of wrong research removed, and the downside is quenching some fraction of the good 19%, it's probably better to leave it alone.
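The arithmetic behind that trade-off, using the comment's own rough estimates (the chilling fraction is a hypothetical input, chosen only to illustrate the point):

```python
# Rough taxonomy from the comment: 1% fraud, 80% worthless, rest good.
fraud = 0.01
worthless = 0.80
good = 1.0 - fraud - worthless  # the remaining 19%

# Max upside of a perfect crackdown is removing the fraud; the downside
# is chilling some fraction f of the good work (f is hypothetical).
f = 0.06
upside = fraud
downside = good * f
print(upside < downside)  # True: even a 6% chilling effect outweighs it
```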

[+] logicslave|5 years ago|reply
The root of the problem is that we allow people with highly esteemed credentials to run society. They jumped through a hoop.

"You got straight As at 14 - 18 years old and got into an ivy league school as a result? Here run this venture fund."

"You got a PhD in Economics with a good publication by P-hacking your secret data? Here take a run at the FED with power over the US Economy. Your Phd shows that you are the man for the job."

"You got a PhD in ML by making some incremental improvement on some already existing model and then doing massive hyper parameter tuning? Here, become director of research at this big corporation."

Research will never be fully productive in this system, there are too many people who have too much to gain from gaming the publication system.

[+] opportune|5 years ago|reply
All three of those examples are missing a step of some 5-20 years in the middle. Nobody is running a venture fund as a 22-year-old Ivy grad (barring someone taking over for a family member, but in that case it has nothing to do with being an Ivy grad), nobody is running the Fed right out of a PhD, and nobody is a director at a big corporation right out of a PhD.

Presumably once those people are in such high-power positions, they also have a track record of real accomplishments behind them; it's certainly possible they've lied and cheated their entire career but it's definitely less likely they'll make it that far that way.

[+] CannoloBlahnik|5 years ago|reply
This is absolutely not the reality of a PhD. On what planet do people with those backgrounds get those positions out of their PhDs? What's the basis for this entire comment?
[+] throwawayiionqz|5 years ago|reply
And the cherry on top: once you are part of a research unit at a big corporation, your annual bonus depends on your number of publications. Hence the never-ending pursuit of incremental but well-marketed results, and hence papers with 6+ authors, of whom only 1-2 substantially contributed, being the new normal.
[+] Something1234|5 years ago|reply
Thanks for pointing out the problem with credentials.

Name a better indicator of competence, then.

[+] cosmodisk|5 years ago|reply
Couldn't agree more. And it's getting worse. In certain fields, people are almost untouchable. I've lost count of the discussions I've had about doctors and how absolutely incompetent some of them are; the usual reaction is something like 'No, you don't say! But the selection process ensures only the smartest get selected, so what are you on about?'. The second group is PhDs, and in general all sorts of professors, who get elevated almost to god-like status. You almost get a free pass to spread whatever shit you want if the credentials are right.

The corporate world is exactly the same. Since my last promotion, I now have a nice title. People now listen to what I have to say. They even take me seriously. I can now go to some guy who's got 20 times more experience and start selling my consulting services (I'm not a consultant).

[+] alexashka|5 years ago|reply
I'll trade my credential for a few million any day of the week.

Never have any takers, I wonder why.

[+] analog31|5 years ago|reply
You forgot a step:

"Your parents are rich, so you have a chance."

[+] sjg007|5 years ago|reply
Publications really only look impressive to outsiders and grad students.
[+] DoreenMichele|5 years ago|reply
So, some years ago, Temple Grandin wrote a set of standards, McDonald's adopted them, and McDonald's buys so much beef that they became the de facto new standard for the beef industry. And it's a set of standards that helps beef producers succeed rather than a "gotcha" trying to find who is guilty.

And that's the way you make the world a better place. Not by looking for new and creative ways to nail "bad guys" to the wall after you started from an assumption of guilt.

I don't like this article. I don't like it at all. My feeling is that it was written as an emotional response to the pandemic and it is getting traction on HN for the exact same reason.

People are stressed out and they are looking for a villain to go after. It won't fix the real problem -- the pandemic -- but that's how people tend to behave in a crisis.

And it's a slippery slope towards a more draconian world. It doesn't make things better.

[+] _Microft|5 years ago|reply
The McDonald's story reminds me of the Brussels effect [0], where legislation in the EU extends (not by law but in effect) to other parts of the world, because it is easier or cheaper to comply with it for all customers than to treat EU customers and others differently.

[0] https://en.wikipedia.org/wiki/Brussels_effect

[+] viburnum|5 years ago|reply
A friend of mine was doing a chemistry phd when he discovered his supervisor was falsifying data. If he had blown the whistle it would have ended his career, but if he had played along his dissertation would have been based on false data. Both options were bad so he just quit.

https://www.statnews.com/2016/11/25/postdocs-grad-students-f...

[+] oerjgoejq|5 years ago|reply
I was doing a robotics engineering PhD at a highly ranked university a few years ago. I contacted the dean of engineering and the office of legal affairs and informed them that my advisor had submitted falsified data to journals, falsified financial statements to his sponsors and the university, performed experiments risking serious bodily harm on human test subjects without IRB approval, committed wage theft against multiple students, and slandered several of his research assistants to keep them from getting funding and work outside of his control. They swept it under the rug and gave him tenure. This was one of the universities that was in the news for faculty members and administrators accepting bribes from celebrities a year or two ago. They added another billion to their endowment and I left with my MS.
[+] kovac|5 years ago|reply
That's some guts to do the right thing. Respect.
[+] OminousWeapons|5 years ago|reply
I'd argue that a lot of what the authors describe isn't actually "fraud"; it's exploitation of an intentionally broken system.

I've definitely had authors on my papers who didn't do work. I've definitely written papers for people who didn't do work. I've definitely done peer reviews on behalf of PIs. Why do people do this? Because the regulators allow it and they want the system that way. Why should who wrote the paper have any impact on review? Why should it matter who the journal editors are? Why should it matter where the paper is from? Etc...

[+] owenshen24|5 years ago|reply
Interesting notes from the paper mentioned in the article: https://journals.sagepub.com/doi/pdf/10.1177/174701611989840...

- "only 39 scientists from 7 countries have been subject to criminal sanctions between 1979 and 2015 (Oransky and Abritis, 2017)" That seems...very low.

- "The Retraction Watch database—the largest of its kind—currently includes more than 18,500 retracted articles (Retraction Watch database, 2019). A recent analysis of 10,500 retracted papers up to 2016 showed that 0.04% of papers are retracted." This is once again a lower-bound; presumably if you account for additional authors and p-hacking the numbers go up a lot.

Pushing for replication and improved methodology can help, but some of these issues seem to be related to scale. There are many more people outputting papers than there are people willing to vet them (outside of peer review). Furthermore, when you have many people researching hot fields, you should expect false positives and overestimates to dominate published results, even when everyone is trying to practice good statistical hygiene. (https://journals.plos.org/plosmedicine/article?id=10.1371/jo...)
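The "false positives dominate hot fields" point can be made concrete with the positive-predictive-value formula from the linked PLoS Medicine paper (Ioannidis 2005); the prior, power, and significance level below are illustrative assumptions, and bias and multiple teams would lower these numbers further:

```python
# Probability that a statistically significant finding is actually true,
# given the pre-study probability that the hypothesis is true (no bias).
def ppv(prior: float, power: float = 0.8, alpha: float = 0.05) -> float:
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

print(round(ppv(0.50), 2))  # well-motivated hypothesis -> 0.94
print(round(ppv(0.01), 2))  # speculative hot-field hypothesis -> 0.14
```

Even with perfect statistical hygiene, a field that mostly tests long-shot hypotheses will fill its journals mostly with false positives.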

[+] woofie11|5 years ago|reply
Basic set of checks-and-balances:

* Preregistration and adoption of open science practices

* Public access to research results, methods, and data, with some exceptions (such as PII)

* Federally-funded universities can't use NDAs or non-disparagement agreements

* Federally-funded universities must respond to records requests under terms similar to FOIA (note that under FOIA the requester pays costs)

* Federally-funded universities must adopt transparent governance

* Salary caps at federally-funded universities and affiliated organizations

* Conflict-of-interest laws with hard enforcement

* Federally-funded universities must publicly publish research misconduct and alleged research misconduct. The latter is tricky, since you don't want to smear the researcher without proof, but you also don't want to trust results.

This really needs reform.

[+] DanielleMolloy|5 years ago|reply
I've read that humanity published as many scientific papers from 2003 to 2016 as in all of history before 2003.

So what happened around 2000? Who turned the scientific mission into a blind competition for superficial metrics? So many people I meet in science (apart from the few who benefited from this system, and were therefore selected by it) are frustrated by publishing for the sake of publishing (not for the science) and the bad incentives this system creates.

Who thought that these superficial metrics would improve anything about science, and why?

[+] nitrogen|5 years ago|reply
With any exponential growth curve, you can point to some semi-recent point on the curve and say 50% is to the right of that point. If the population is growing exponentially (it is), and the percentage of the population in academia is constant or growing (that part I'm not sure about), then you could reasonably expect the quantity of research to be growing exponentially as well. Maybe it's just a curve with a doubling period of 13 years.
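For what it's worth, the parent's statistic is exactly what a 13-year doubling period predicts. A minimal sketch (an idealized exponential, not actual publication counts):

```python
import math

# If output grows as rate(t) = 2**(t/d), cumulative output up to time t
# is (d / ln 2) * 2**(t/d).  Measure t in years since 2003, with d = 13.
d = 13.0
def cumulative(t):
    return (d / math.log(2)) * 2 ** (t / d)

before_2003 = cumulative(0)                         # everything up to 2003
from_2003_to_2016 = cumulative(13) - cumulative(0)  # the next 13 years
print(from_2003_to_2016 / before_2003)  # 1.0 -- the two totals are equal
```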
[+] sgillen|5 years ago|reply
I think it's mostly a numbers game: more PhDs, roughly the same number of research positions. This means most scientists don't know most other scientists in their field, so the metrics people use to select candidates become more important. That, plus the increased competition, means a bigger rat race, more pressure to publish, etc.
[+] amcoastal|5 years ago|reply
I imagine the internet has done a lot to increase the speed of research, communication, and writing. I imagine it also speeds up peer review and has expanded the amount of available places to publish your work. I agree though, it would be nice to spend more time sciencing and less time writing about it.
[+] BaronSamedi|5 years ago|reply
Academia is not unique in this regard. Superficial metrics have been introduced into many fields with similar results. Warnings about the danger of metrics have been ignored.

Campbell's law: "The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor." (1979)

[+] fock|5 years ago|reply
0) The internet has made citing (and the expectation of knowing everything) so much easier. 1) China entered the game in force. 2) Everyone else has to step up. 3) Companies entered the market. 4) In some areas, an arXiv preprint (which is good) full of errors/hand-waving gets a hundred-odd citations; peer-reviewed work doesn't.
[+] lrnStats|5 years ago|reply
My guess is the education bubble. Loans are subsidized, universities have bigger budgets and want more studies done.
[+] RcouF1uZ4gsC|5 years ago|reply
I think one of the big things that you can do is split up the data gathering and data analysis.

Kind of like we don’t trust the companies to audit themselves, instead we have an outside firm.

In this model, a researcher would create a hypothesis and collect the data. Their team would write the background and methods sections of the paper.

Then the entirety of raw data would be sent off to a third party for data analysis and they would write the results part of the paper.

The original team would then write the discussion part discussing the implications of the study.

All papers would be required to be made public.

The idea is that there would be specialized firms that do the analysis on the raw data for everyone. These would be carefully audited and certified by the government. They have no incentive to play statistical games. If they get caught cheating, then they have to pay for the analysis on all their papers to be redone by a competitor, and if any errors are found in a paper's analysis it is automatically retracted. While this re-analysis is going on, all papers would be quarantined with a note stating that the paper is having its analysis redone. This incentivizes the analysis firm to be ethical, as well as incentivizing researchers to pick ethical analysis firms.

Separating data collection from data analysis would help align incentives better.

[+] wjn0|5 years ago|reply
That's an interesting idea that could stop very specific types of fraud, certainly in the life sciences. But it's not feasible for all kinds of research, and in fact could hinder lots of research.

> All papers would be required to be made public.

This is more universally feasible. Publicizing the data and analysis tools (scripts, software) falls into the same category, and would go a long way to help without the need for such strong separation.

[+] Gatsky|5 years ago|reply
Research fraud gets a lot of attention because it is so black and white. But it is a symptom of larger problems. One issue is that the pace of progress is slowing, and as a result incremental gains are more prominent. This is fertile ground for fraudsters, as they can produce results which are plausible enough while seeming to be an important contribution to the field. All fraud fits into this category. Nobody makes a new grand unified theory of everything which they know is bogus. That would be too much work for a start.

The other issue is the huge expansion in university size. Most of the fraud I've seen or heard about happens in university research departments, which shows the importance of their incentive structure. One can make things up and not only succeed, but do better than one's competitors in this research setting, AND get a tenured position with life-long security. All competitive fields where achievement receives external and highly persistent rewards suffer from this problem, whether it be sport and performance-enhancing drugs, Ivy League university admissions, or even venture capital funding (Theranos).

The natural response is to ask for more regulation and structural change in how research is conducted, e.g. pre-registration, different statistical standards, etc. But this has the major disadvantage of making life harder for the honest people. It also requires the creation of some parallel workforce to handle all the checking. Research is already so difficult. Paradoxical effects, where such measures increase fraud, are definitely possible.

There will never be zero fraud. The aim should be to change how research is done to make the experience more humane, train and mentor young scientists carefully and avoid perverse incentives. As far as I can tell, nobody has any idea how to do this. Instead they want to create investigatory bodies which will siphon off money that could be used for research, and then ruin lives pursuing some key performance indicator like N successful fraud cases per quarter. This experiment was already run in the USA with the Office of Research Integrity, and it failed. Malcolm Gladwell, who I am not a fan of in general, has a good podcast about it [1].

[1] http://revisionisthistory.com/episodes/28-the-imaginary-crim...

[+] DrNuke|5 years ago|reply
Bluntly put, peer review is an overwhelming, unpaid, unsatisfying, time-expensive task. The only step forward is to force the release of both data and implementation, at least for the highest-ranked journals, though this may open more unpredictable cans of worms. It is the elitist academy model on one side, no longer working, against the democratisation of research on the other, still just a torrential flow of noise.
[+] brownbat|5 years ago|reply
You can't really improve this while the lead author has the final say on how to treat outliers.

In any given study, there are going to be hundreds of special cases in the data that you didn't anticipate, and you have to decide whether to include or exclude them.

Any researcher will subconsciously be more sympathetic to arguments to exclude subjects that go against the principal theory, and less so to subjects that confirm it.

And it's a battle of reasonable arguments, most of the cases aren't bright line fraud or misconduct, they're just humans finding some arguments more compelling and the impossibility of escaping our own biases. (And yes, sometimes it's fraud, but fraud is just the tip of the iceberg if we're talking about genuinely improving the reliability of scientific findings.)

Prepublication is helpful, but more and more I'm convinced that the only way to do proper science would be to completely disaggregate study design from study execution.

[+] lrnStats|5 years ago|reply
Similar to research fraud, I want the medical field examined for anti-science fraud.

I caught my physician recommending an expensive and dangerous surgery that could be done by a dentist or a surgeon. I asked if there was data; she said yes. There was no data. And the trend was toward using lasers rather than surgery, since it's safer. I confronted her and she said:

"If you ask a physician, they will recommend a physician. If you ask a dentist, they will recommend a dentist."

This physician used factionalism rather than science.

I imagine this has happened on a massive scale.

[+] lettergram|5 years ago|reply
I think there is outright fraud and subtle fraud.

For instance, most mouse studies, specifically those relating to aging, drug safety, and cancer research, should be thrown out. This is well covered here:

https://m.youtube.com/watch?v=pRCzZp1J0v0

That's not the only issue, but it's a known issue that everyone is ignoring, and it affects all studies using the most common mouse strains.