Scientific journals have been retracting unreliable articles at rapidly escalating rates in the past few years, raising concern about whether research faces a burgeoning ethical crisis.
Various causes have been suspected, with the common theme being that journals are seeing more cases of plagiarism and fudging of data as researchers and editors succumb to mounting pressure to produce highly influential papers. As the editor of The Lancet, Richard Horton, recently told The Wall Street Journal, a widely cited paper is “your passport to success.”
But a key factor in the rise—the growing use of text-comparison software—may be playing a much larger role than many reports on the problem have suggested. Real rates of dishonesty could even be declining as the software improves and it becomes increasingly clear to all involved that fraud and plagiarism will eventually be caught.
The current generation of software came on the market only about six to eight years ago, and it’s now being widely employed by journals, said R. Grant Steen, a freelance writer who studies trends in retractions.
Mr. Steen, a former associate professor of psychiatry at the University of North Carolina at Chapel Hill, conducted a study that found the age of retracted articles has also grown quickly: the average article retracted in 2000 was just five months old, while the average article retracted in 2009 was 32 months old. That suggests the growth in retractions largely reflects the ability of new software to find undiscovered problems lurking in journal archives.
PubMed, a database of biomedical literature assembled by the National Institutes of Health, shows that the number of retracted articles jumped from around 40 a year in the late 1990s to 205 in 2009, 294 in 2010, and 407 in 2011.
“It certainly seems like a lot of it is a detection issue,” Zachary C. Coble, a research librarian at Gettysburg College, said of the growing number of retractions in recent years.
The trend toward retracting older articles is clear to Harold R. Garner, executive director of the Virginia Bioinformatics Institute at Virginia Tech. The institute is home to eTBlast, a free Web-based tool for comparing the texts of journal articles. The tool can uncover plagiarism as well as indirect evidence of other problems, such as fraudulent data.
Mr. Garner said he had used eTBlast to dig through the archives of scientific journals and had found thousands of apparently undetected cases of plagiarism and other problems. Many journals are doing the same, using eTBlast or similar proprietary tools, he said.
The searches find mostly cases of duplicate text, often with multiple paragraphs of identical or nearly identical material, Mr. Garner said. Many of those cases involve authors repeating their own work with minor variations, apparently hoping to pad their publication lists. Many other cases involve uncredited copying of other authors.
And such findings can lead to the discovery of deeper problems, such as cases in which a plagiarist copied another author's words while altering the original data to make the findings seem more relevant to the new paper, Mr. Garner said.
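The article does not describe eTBlast's internal workings, but the general idea behind duplicate-text detection can be illustrated with a short sketch. The Python code below is a minimal illustration under assumed parameters, not eTBlast's actual algorithm: it breaks each document into overlapping word sequences ("shingles") and flags pairs whose overlap exceeds a threshold. The shingle size, threshold, and function names are all assumptions made for the example.

```python
# Minimal sketch of how text-comparison tools can flag duplicated passages.
# Illustrative only: the shingle size and threshold are assumptions, not
# eTBlast's actual parameters or algorithm.
import re

def shingles(text, n=3):
    """Return the set of all n-word sequences ("shingles") in a text."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity: shared shingles divided by all distinct shingles."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def looks_duplicated(doc1, doc2, threshold=0.5):
    """Flag a pair of documents whose shingle overlap meets a threshold."""
    return jaccard(shingles(doc1), shingles(doc2)) >= threshold

if __name__ == "__main__":
    a = "The average retracted article in 2000 was just five months old."
    b = "The average retracted article in 2000 was only five months old."
    print(jaccard(shingles(a), shingles(b)))   # 0.5: half the shingles match
    print(looks_duplicated(a, b))              # True under this toy threshold
```

Near-verbatim passages score far above unrelated text on such overlap measures, which is why "multiple paragraphs of identical or nearly identical material" are the easiest cases for these tools to surface.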
Lurking Problems
Even more problems may be found as search technologies grow increasingly sophisticated. Some software developed by government agencies can detect tiny computerized alterations in photo images, and that capability might eventually be used to find manipulations in photographic data, Mr. Garner said. Future advances might also help the detection software spot statistical discrepancies indicative of data fraud, he said.
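The article does not say what such statistical checks would look like. One screen often discussed in data-forensics work, offered here purely as an illustration of the idea, is Benford's law, which predicts how often each leading digit should appear in many kinds of naturally occurring numbers. The sketch below, with assumed function names and cutoffs and no connection to any tool mentioned in this article, compares a dataset's first-digit frequencies against Benford's expectation using a chi-squared statistic.

```python
# Hedged sketch: screening reported values against Benford's law, a commonly
# discussed indicator (not proof) of fabricated data. Names and cutoffs here
# are illustrative assumptions.
import math

def first_digit(x):
    """Leading nonzero digit of a number (sign ignored)."""
    s = f"{abs(x):.10e}"  # scientific notation, e.g. '3.1400000000e+02'
    return int(s[0])

def benford_chi2(values):
    """Chi-squared statistic of observed first digits vs. Benford's law."""
    counts = [0] * 9
    for v in values:
        if v:  # skip zeros, which have no leading nonzero digit
            counts[first_digit(v) - 1] += 1
    n = sum(counts)
    chi2 = 0.0
    for d in range(1, 10):
        expected = n * math.log10(1 + 1 / d)  # Benford probability of digit d
        chi2 += (counts[d - 1] - expected) ** 2 / expected
    return chi2

if __name__ == "__main__":
    # A value far above ~15.5 (the 5% cutoff for 8 degrees of freedom) would
    # mark the dataset for human follow-up, not as proof of fraud.
    data = [132.0, 28.7, 19.3, 310.5, 45.1, 12.9, 170.2, 88.4, 23.6, 150.8]
    print(f"chi-squared vs. Benford: {benford_chi2(data):.2f}")
```

A large deviation from Benford's expectation is only a screening signal that invites closer human review; legitimate datasets can fail the test, and fabricated ones can pass it.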
Even with the rapid growth in detection, there are still plenty of potential problems lurking in the archives of scientific journals, Mr. Garner said. His own computer searches of nearly half of PubMed’s database of 22 million articles have found about 8,000 cases of “highly similar” articles by different authors. The searches, part of a federally sponsored project aimed at exploring the dimensions of the problem, identified 252 instances that Mr. Garner considered so serious that he contacted the journals involved.
Only about half the journals responded by initiating investigations, he said. And the federal Office of Research Integrity, which guards against scientific fraud for the Department of Health and Human Services, hasn’t renewed its financial support for Mr. Garner’s work, which is now several years old.
Even less is known about the extent of similar problems in other scientific fields. Ethical investigations are most advanced in medical research because it’s the largest federally sponsored scientific specialty and because thousands of peer-reviewed articles are available via PubMed, Mr. Garner said.
Beyond the improved detection technology, other forces could be contributing to the higher rates of retractions without necessarily indicating an actual increase in cheating.
One factor could be a general increase in vigilance among editors, said John M. Budd, a professor of information science and learning technologies at the University of Missouri at Columbia. Scientists are living in a “more vigilant society” in general, he said.
Retraction data also show a steady decline in the number of repeat offenders, Mr. Coble said. That could be another sign that the new detection technology is having an effect, making it harder to plagiarize or commit other ethical offenses.
“There’s an upward spike in retractions because people are finding it,” Mr. Garner said. “But I think at the same time, the number of new documents that could have ethical problems like plagiarism is going down because people are aware of the fact that they’re being watched.”
Pressure to Publish
Systems for ensuring integrity in scientific research nevertheless remain far from perfect. Researchers still face strong pressures to publish, and as scientific journals proliferate, experts acknowledge that those determined to cheat will continue to find a way.
One of the major deterrents, peer review, still has weaknesses. It was created at a time when the world of science was much smaller, and when both authors and reviewers were likely to be better qualified, Mr. Steen said. Disciplines now are more crowded, he said, and leading specialists in a subject now either can’t keep up with the demand for their reviewing services or don’t give papers the time necessary to ferret out some of the telltale signs of fraud.
The detection software might also be distracting journals from chasing the most serious cases of fraud by creating pressure to retract articles whose authors are guilty only of accidental and minor instances of plagiarism, Mr. Steen said. That might be especially true given the growing number of international scientists whose cultural norms may allow greater latitude in copying text without credit.
“We are certainly getting better at detecting it,” Mr. Steen said of the presence of possible fraud. “But I wonder whether we’re also lowering the bar to what sort of sin requires retraction.”
