Research misconduct, rather than error, is the leading cause of retractions in scientific journals, with the problem especially pronounced in more prestigious publications, a comprehensive analysis has concluded.
The analysis, described on Monday in PNAS, the Proceedings of the National Academy of Sciences, challenges previous findings that attributed most retractions to mistakes or inadvertent failures in equipment or supplies.
The PNAS finding came from a review of more than 2,000 published retractions, including detailed investigations into the public explanations given by the retracting authors and their journals.
The project was intended to explore the types of errors that typically lead to retractions, said one author of the PNAS paper, Arturo Casadevall, a professor of microbiology and immunology at the Albert Einstein College of Medicine.
“And what we got blown away by was the fact that the retraction notices are wrong, in a lot of the cases,” said Dr. Casadevall, who produced the study along with Ferric C. Fang, a professor of laboratory medicine and microbiology at the University of Washington.
Research misconduct was found to be more prevalent in articles published by leading journals, including Nature, Science, and Cell, and its unexpectedly high rate should be taken as yet another warning that universities and grant-making agencies are relying far too heavily on publication rates as a measure of scientific performance, Dr. Casadevall and Dr. Fang said.
“Right now we’re incentivizing a lot of behavior that’s not actually constructive to science,” Dr. Fang said.
Hints of the size and scale of the fraud problem now confirmed by Dr. Casadevall and Dr. Fang had already begun to emerge. Their other co-author, R. Grant Steen, a freelance writer and former associate professor of psychiatry at the University of North Carolina at Chapel Hill, has done work showing a surge in retraction rates in recent years.
But Mr. Steen had largely been attributing the rise to instances of plagiarism, which is now more easily found through the growing use of text-comparison software. Although a type of fraud, plagiarism doesn’t necessarily mean faulty data. And other recent studies—such as an April 2011 analysis in the Journal of Medical Ethics and an August 2006 study in the Medical Journal of Australia—showed error as the leading cause of retractions.
For their PNAS analysis, Dr. Casadevall and Dr. Fang combed through all 2,047 biomedical-research articles listed this past May on PubMed, a federally managed database, as having been retracted. Through that process, they found 158 instances where the reason for the retraction was listed as an error, but where other sources—such as court proceedings, media investigations, or inquiries involving the federal Office of Research Integrity—revealed an underlying instance of research misconduct.
In all, 67 percent of the 2,047 retractions were attributable to misconduct, Dr. Casadevall, Dr. Fang, and Mr. Steen wrote, while only 21 percent were attributable to error. The cases of misconduct often involved leading scientific journals, they said, matching previous research that suggested a correlation between fraud and a journal’s impact factor, a measure of how often its articles are cited by subsequent articles.
The risks to public health were illustrated this year by a report in Nature in which the pharmaceutical company Amgen described its attempts to independently verify a collection of 53 published studies concerning cancer drugs. The Amgen scientists found they could confirm the scientific findings in only 11 percent of the articles.
“This was a shocking result,” wrote the authors, C. Glenn Begley, an Amgen consultant, and Lee M. Ellis, a professor of surgery and director of the Colorectal Cancer Translational Research Program at the University of Texas M.D. Anderson Cancer Center, in Houston.
For Dr. Fang, the amount of misconduct in high-profile journals is a clear sign that researchers are facing far too much pressure from statistical measures such as publication rates and impact factors when seeking job promotions and grant money.
Rather than taking the time to use qualified experts to assess a researcher’s scholarship, Dr. Fang said, universities and grant-making agencies too often use the statistical measures as easy proxies. That creates an enormous incentive for researchers to cut corners, or even fabricate study data, jeopardizing the reliability of the entire research enterprise, he said.
As an example, Dr. Fang said his department at the University of Washington recently had a job opening in which all five of the finalists had earned a first-author byline in Cell, Science, or Nature as postdoctoral researchers. “This was the price to get into the door, and then you have maybe a 20-percent chance of getting that job offer,” he said. “So this is too high a bar.”
Researchers seeking grant money and promotions feel that kind of pressure, exacerbated by budget cuts, throughout their careers, Dr. Fang said. By comparison, he said, he had no first-author papers in those leading journals during his own postdoctoral career, “and I had four job offers at good universities.”
The medical journals, as a general rule, don’t deserve blame, Dr. Fang said. “They don’t exist to reshape the scientific enterprise,” he said. “They exist to publish high-quality science in an interesting and engaging way, and to publicize that, and I think they do a great job of that.”
Dr. Casadevall was more critical, saying that the misconduct discovered through their study was “the tip of the iceberg” and that journals needed to develop better standards. As an example, he cited the Journal of Biological Chemistry, which accounted for 27 of the 158 retractions that had been attributed to error but that he and his co-authors found actually involved misconduct. Part of the problem, he said, is that the journal has a policy of allowing retractions without giving any public explanation of the reason.
In such a setting, Dr. Casadevall said, “the misconduct is going through the roof because the rewards are disproportionate.”
The editor in chief of the Journal of Biological Chemistry, Martha J. Fedor, a professor of chemical physiology at the Scripps Research Institute, said she was confident that authors involved in retractions were held accountable through the journal’s practice of notifying the author’s institution.
“We have not had a policy of publishing statements about the source of errors in a manuscript that we are not able to verify conclusively,” she said.