Universities and funding agencies typically measure the value of published research by the number of citations an article attracts or by how often the journal in which it appears is cited. Both methods have long been accepted as imperfect but necessary shorthands.
Going beyond pure citation numbers to assign value to an individual article can be both complicated and uncertain. But one leading attempt to do just that took a major leap forward on Tuesday with the formal endorsement of a team of analysts at the National Institutes of Health.
Known as the relative citation ratio, or RCR, the metric counts an article’s network of citations and then weights the result against a comparison group of papers drawn from the article’s own field. Its developers say that approach better reflects how experts actually judge a paper’s influence than a raw citation count does.
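In rough terms, the RCR compares how often a paper is cited with how often comparable papers in its field are cited. The snippet below is a minimal sketch of that idea, not the NIH’s actual implementation, which builds its comparison group from the article’s co-citation network and benchmarks it against a reference set of NIH-funded papers; the function name and the sample numbers here are invented for illustration.

```python
# Rough sketch of the relative-citation-ratio idea -- NOT the NIH method.
# Assumes we already know an article's citations per year (its citation rate)
# and the citation rates of comparison papers from its field.

def relative_citation_ratio(article_citation_rate, field_citation_rates):
    """Article's citation rate divided by the average rate of its field peers.

    A value of 1.0 means the article is cited about as often as comparable
    papers in its field; 2.0 means roughly twice as often.
    """
    expected_rate = sum(field_citation_rates) / len(field_citation_rates)
    return article_citation_rate / expected_rate

# Example: a paper cited 12 times a year, in a field where comparable papers
# average 4 citations a year, gets an RCR of about 3.0.
print(relative_citation_ratio(12, [3, 4, 5, 4]))
```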
“In that context,” said one of the ratio’s developers, George M. Santangelo, director of the NIH’s Office of Portfolio Analysis, “our claim is that this is an excellent method, and we haven’t seen any others that are better.”
That’s a significant proclamation, given the degree to which citation-based rankings drive hiring and promotion decisions at universities, and grant allocations by funding agencies. Citation counts alone can vary widely across disciplines. And the most popular journal-wide ranking methodology, known as journal impact factor, is facing growing skepticism because the articles within a single journal vary so widely in how often they are cited.
Impact factor’s dominance suffered a major hit in July, when a team of publishers posted an analysis to the bioRxiv preprint database showing that most published papers attract far fewer citations than their journals’ impact factors would suggest. At both Science and Nature, about 75 percent of published articles attract fewer citations than their journal-wide impact factors of 34.7 and 38.1, respectively. Such findings helped prompt the American Society for Microbiology, the world’s largest life-science society, to announce it would stop using impact factor in promoting its journals.
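The arithmetic behind that finding is straightforward: an impact factor is a mean, and citation distributions are heavily skewed, so a handful of very highly cited papers can pull the journal-wide average well above what a typical article receives. A minimal illustration, using invented citation counts rather than real Science or Nature data:

```python
# Why a journal-wide impact factor (a mean) can exceed the citation count of
# most of its articles. The numbers below are made up for the example.
from statistics import mean, median

citations = [2, 3, 4, 5, 6, 8, 10, 15, 120, 200]  # skewed: two blockbusters

journal_average = mean(citations)   # 37.3 -- what an impact factor reflects
typical_paper = median(citations)   # 7.0  -- what most papers actually get

below_average = sum(c < journal_average for c in citations)
print(f"mean={journal_average}, median={typical_paper}, "
      f"{below_average} of {len(citations)} papers fall below the mean")
```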
Impact factor “is a too-dominant metric,” said Ludo Waltman, a bibliometrics researcher at Leiden University, in the Netherlands. “There is too much in today’s scientific system that depends on the JIF. This is not a healthy situation.”
RCR, however, has its own shortcomings, Mr. Waltman said. One of the most glaring, he said, is that its complicated system for weighting networks of papers that cite other papers is field-specific, so it appears to discount the value of interdisciplinary science.
Mr. Santangelo said that complaint, which had been lodged against a previous version of RCR, has since been largely resolved. “We demonstrate that the values generated by this method strongly correlate with the opinions of subject-matter experts in biomedical research, and suggest that the same approach should be generally applicable to articles published in all areas of science,” he and his team of NIH analysts wrote on Tuesday in a report published by the open-access journal PLOS Biology.
Mr. Waltman said he does believe that it’s necessary for universities and funders to use statistics to measure the value of published science. He has developed his own standard, known as source normalized impact per paper, or SNIP. But that is also a “quite complex metric,” Mr. Waltman said, and neither SNIP nor any other measurement should be used as the sole basis for gauging the scientific value of research.
As a publisher, PLOS shares that point of view. PLOS recognizes the need for “a new, robust quantitative metric that focuses on the article” rather than the journal in which it appears, said David Knutson, a spokesman for PLOS. And PLOS agrees with Mr. Waltman on the need for even more alternative methods of assessment, Mr. Knutson said. “Metrics should support, not replace, expert judgment,” he said.
Paul Basken covers university research and its intersection with government policy. He can be found on Twitter @pbasken, or reached by email at paul.basken@chronicle.com.