Authors, you really ought to take a look at the journal articles you cite. Not only is it the responsible thing to do, it will save you the embarrassment of discovering after the fact that you have given a nod to a retracted or discredited paper.
That advice sounds obvious, but some scholarly authors still need to hear it, says John M. Budd, whose research shows that a large number of retracted papers continue to be cited long after they have been flagged as untrustworthy.
At a recent meeting of the Association of College and Research Libraries in Philadelphia, Mr. Budd, a professor in the School of Information Science and Learning Technologies at the University of Missouri at Columbia, shared preliminary findings on his most recent study of retracted publications in biomedical literature. It ought to put the fear of God, or the scholarly-communication equivalent of God—loss of reputation, maybe—into any researcher who has ever grabbed citations without checking them firsthand.
Mr. Budd has analyzed biomedical retractions for more than a decade. His work suggests that the problem is not going away, but is getting bigger. In 1999 he led a study indicating that, over a 30-year period, 235 scholarly articles in the field had been retracted, 40 percent of them because of some kind of scientific misconduct. Now, in a follow-up study, he’s analyzing data from the most recent decade with the help of Zach C. Coble, a master’s student in Missouri’s School of Information Science, and Katherine M. Anderson, a specialized-services librarian in the university’s Health Sciences and Veterinary Medical Libraries. From 1999 to 2009, they found, 1,164 articles were retracted from biomedical journals, 55 percent because of “some type of misconduct,” according to the paper Mr. Budd presented in Philadelphia.
“It does appear to be getting worse,” he told me in an interview. “Judging from our analysis, there are ever more statements of retraction, so the curve is still heading upward.”
He and his colleagues also wanted to figure out how many of those 1,164 papers had turned up in citations that appeared after the retractions. To give editors and publishers time to identify problematic work, the investigators counted only those citations that appeared a year or more after a retraction notice had come out. They came across 391 articles that cited retracted papers. Ninety-four percent of those 391 did not mention the retractions.
That leads Mr. Budd to speculate that some authors just aren’t seeing the retraction notices—which suggests that they aren’t looking at the articles themselves but finding them through more roundabout and less reliable means. True, some publishers do a better job than others of expediting and publicizing retraction notices. In most cases, though, the databases that researchers use will turn up the notices along with the articles.
“When you retrieve the original paper from any version of Medline or PubMed, you also retrieve the retraction statement,” Mr. Budd told me. “It’s a big red flag”—but one you’ll see only if you bother to search for the paper itself. “That’s what leads us to speculate that people are not going through a formal search for these papers—that maybe they’re going through another paper that cites the original research.”
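For readers inclined to check for themselves, here is a minimal sketch—not drawn from Mr. Budd’s study—of how one might ask PubMed, through the National Library of Medicine’s public E-utilities service, whether a given article has been flagged as retracted. The PMID shown is a placeholder, and the script assumes Python with the `requests` library installed.

```python
# A minimal sketch (not part of Mr. Budd's study): query the NCBI E-utilities
# "esummary" endpoint and check whether PubMed lists the article's publication
# type as "Retracted Publication". The PMID below is a placeholder.
import requests

ESUMMARY_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"

def is_flagged_retracted(pmid: str) -> bool:
    """Return True if PubMed's summary record lists 'Retracted Publication'."""
    resp = requests.get(
        ESUMMARY_URL,
        params={"db": "pubmed", "id": pmid, "retmode": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    record = resp.json()["result"][pmid]
    # "pubtype" is a list such as ["Journal Article", "Retracted Publication"].
    return "Retracted Publication" in record.get("pubtype", [])

if __name__ == "__main__":
    pmid = "12345678"  # placeholder; substitute the PMID of the paper you plan to cite
    print(f"PMID {pmid} flagged as retracted:", is_flagged_retracted(pmid))
```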
Some explanations are more charitable than others. Mr. Budd points out that there’s more pressure on researchers than ever “to publish more and more and more,” and that it’s harder and harder to keep up with everything that comes out. Nowadays “there’s a lesser ability for people to track their literatures very carefully,” he says. “A human being can only absorb so much.”
A researcher may take one of several routes to find articles. “Authors might be searching in Google Scholar, bypassing retraction information in PubMed or tables of contents entirely,” Mr. Budd et al. wrote in their paper.
“There is much room for improvement on how the retracted publications are presented electronically,” they added. For instance, a retraction notice tucked into a journal’s table of contents is far easier to overlook than an all-caps electronic watermark that says “RETRACTED” on every page of the article.
Look for a final version of Mr. Budd’s analysis this year. In the meantime, authors ought to pay attention to his advice: “The primary message, I think, is that if people are going to use the work of others as part of their own research, they need to take special care to go back to original works to make sure that the work is legitimate, that there are no problems with it, and that their own work will not be contaminated” by others’ problematic work.
The editors of the Web site Retraction Watch agree with him. “If you’re citing a paper, you should have read that paper. Evidently that’s a controversial point, but it shouldn’t be,” says Ivan Oransky, executive editor of Reuters Health, who is one of the two medical editor-bloggers who run the site. He and his co-blogger, Adam Marcus, founded Retraction Watch in August 2010. It gets about 100,000 page views a month, Mr. Oransky says—an indication of how hot a topic this is. “As we found out, there’s a tremendous amount to cover,” Mr. Marcus said when I talked with him.
He and Mr. Oransky are on a campaign to get publishers and editors to be as transparent about publicizing and explaining retractions as possible. “Part of what the Budd paper shows is that these things linger, and rarely are they put to death,” Mr. Marcus said. “The papers seem to have life beyond retraction, which I think is an important reason that the scientific community, the ones who are going to be reading these things, need to know as much as possible about them.”