In one of Dr. Seuss's better-known tales of jealousy and prejudice, the Sneetches with stars on their bellies are considered superior to those without, though the stars turn out to mean nothing at all. Now there's more evidence that journals' impact factors, the belly stars of scientific publishing, are a similarly misleading mark of distinction.
A study published by three Canadian researchers has identified a two-decade-long trend in which the world’s top-ranked scientific journals are slowly losing their share of the most-cited articles.
The study, published in the November issue of the Journal of the American Society for Information Science and Technology, found that in 1990, 45 percent of the top 5 percent of the most-cited articles were published in journals whose impact factors were in the top 5 percent: publications like Cell, Nature, Science, and the Journal of the American Medical Association. By 2009, that share had fallen to 36 percent, the authors found.
“We’re still using the high-impact journals, but we are using them less and less,” said Vincent Larivière, an assistant professor of library and information sciences at the University of Montreal, who did the research with two colleagues at the University of Quebec at Montreal.
The team based its findings on an analysis of more than 820 million citations involving 25 million articles published from 1902 to 2009.
A journal's impact factor is the average number of times its articles from the previous two years are cited in a given year. The measure was initially devised to help libraries decide which journals to subscribe to, but it has since assumed a dominant role in evaluations of science itself.
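The arithmetic is straightforward. As a minimal sketch, assuming the standard two-year formula and using invented figures rather than numbers from the study, a journal's 2009 impact factor would be computed like this:

```python
# Sketch of the standard two-year impact-factor calculation.
# The figures below are invented for illustration, not drawn from the study.

citations_in_2009 = 10_000    # citations received in 2009 by items published in 2007-08
citable_items_2007_08 = 250   # articles and reviews the journal published in 2007-08

# Impact factor for 2009: citations this year to the previous two years'
# articles, divided by the number of those articles.
impact_factor_2009 = citations_in_2009 / citable_items_2007_08
print(impact_factor_2009)     # 40.0
```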
The decline in the share of highly cited articles published by top-tier journals appears to be largely a reflection of the Internet, which allows researchers to find any paper they want, regardless of the journal where it first appeared, Larivière said.
Yet the trend dates to 1990, before the Internet went mainstream, suggesting other factors are at work as well, Larivière said. The most likely additional factor is the creation of article databases such as the Web of Science, owned by Thomson Reuters, he said.
There may be some irony, he said, in the fact that Thomson Reuters also compiles impact-factor data. “Because of their own tool, the predictive power of the impact factor has actually decreased,” Larivière said.
The study by Larivière and his colleagues also comes as some universities and faculty members are seeking ways to de-emphasize the competition for publication in prestigious journals, arguing that it's a poor measure of scientific achievement that can create perverse incentives.
The findings, Larivière said, suggest that the main value of scientific journals remains the peer-review process.