
Better Than Impact Factor? NIH Team Claims Key Advance in Ranking Journal Articles

By Paul Basken
September 7, 2016

George Santangelo, director of the NIH’s Office of Portfolio Analysis: “Our claim is that this is an excellent method, and we haven’t seen any others that are better.” (Photo: NIH)

Universities and funding agencies typically measure the value of published research by the number of citations an article attracts or by how often the journal in which it appears is cited. Both methods have long been accepted as imperfect but necessary shorthands.

Going beyond pure citation numbers to assign value to an individual article can be both complicated and uncertain. But one leading attempt to do just that took a major leap forward on Tuesday with the formal endorsement of a team of analysts at the National Institutes of Health.

Known as the relative citation ratio, or RCR, the metric works by mapping an article’s network of citations and then weighting the result against a comparison group of papers in the article’s own field. Its developers said the methodology therefore better reflects how experts assess a paper’s influence, rather than relying on its raw citation count.
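The published method is more involved than that summary, but the general shape of the calculation — an article’s citation rate divided by an expected rate drawn from a field-specific comparison group — can be sketched roughly as follows. This is a simplified illustration, not the NIH team’s actual algorithm, which builds each article’s comparison group from its co-citation network; the Article class and the benchmark list below are invented for the example.

```python
# Illustrative sketch only -- NOT the NIH's actual RCR algorithm. The real
# method defines an article's field via its co-citation network; here the
# "field" is passed in directly as a list of benchmark articles, purely to
# show the shape of the calculation.

from dataclasses import dataclass

@dataclass
class Article:
    citations: int   # total citations received so far
    years: float     # years since publication

def citation_rate(a: Article) -> float:
    """Citations per year, the basic quantity the ratio normalizes."""
    return a.citations / max(a.years, 1.0)

def relative_citation_ratio(article: Article, field_benchmark: list[Article]) -> float:
    """Article citation rate divided by the mean rate of a field benchmark.

    A value of 1.0 means the article is cited at the rate typical of its
    comparison group; 2.0 means twice that rate, and so on.
    """
    expected = sum(citation_rate(b) for b in field_benchmark) / len(field_benchmark)
    return citation_rate(article) / expected

# Example: a 5-year-old paper with 60 citations, in a field whose comparison
# papers average 4 citations per year, scores a ratio of 3.0.
paper = Article(citations=60, years=5)
field = [Article(citations=20, years=5), Article(citations=12, years=3)]
print(round(relative_citation_ratio(paper, field), 2))
```

The point of the field-specific denominator is that a citation count meaningful in a small specialty would look unremarkable in a large one; dividing by the comparison group’s rate puts both on the same scale.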

“In that context,” said one of the ratio’s developers, George M. Santangelo, director of the NIH’s Office of Portfolio Analysis, “our claim is that this is an excellent method, and we haven’t seen any others that are better.”


That’s a significant proclamation, given the degree to which citation-based rankings drive hiring and promotion decisions at universities, and grant allocations by funding agencies. Citation counts alone can vary widely between disciplines. And the most popular journal-wide ranking methodology, known as journal impact factor, is facing growing skepticism due to the wide variety of articles that can appear in a single publication.

Impact factor’s dominance suffered a major hit in July, when a team of publishers posted an analysis to the bioRxiv preprint database showing that most published papers typically attract far fewer citations than their journals’ impact-factor rankings. At both Science and Nature, about 75 percent of published articles attract fewer citations than their journal-wide impact factors of 34.7 and 38.1, respectively. Such findings helped prompt the American Society for Microbiology, the world’s largest life-science society, to announce it would stop using impact factor in promoting its journals.
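That pattern follows from the shape of citation distributions: a small number of heavily cited papers pull a journal’s mean citation count — its impact factor — well above what a typical article in the same journal receives. A quick simulation with synthetic, long-tailed data (illustrative only; the distribution and parameters below are invented, and this is not the publishers’ analysis) shows how roughly three quarters of papers can fall below their journal’s average:

```python
# Illustrative simulation only -- synthetic data, not the bioRxiv analysis.
# Citation counts within a journal are heavily skewed: a few highly cited
# papers raise the mean (the impact factor) above what most articles get.

import random

random.seed(0)

# Hypothetical journal: citation counts drawn from a long-tailed
# (log-normal) distribution with made-up parameters.
citations = [random.lognormvariate(mu=2.5, sigma=1.2) for _ in range(10_000)]

impact_factor_like = sum(citations) / len(citations)          # the mean
share_below = sum(c < impact_factor_like for c in citations) / len(citations)

print(f"mean (impact-factor-like): {impact_factor_like:.1f}")
print(f"share of papers below the mean: {share_below:.0%}")   # roughly 3 in 4
```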

Impact factor “is a too-dominant metric,” said Ludo Waltman, a bibliometrics researcher at Leiden University, in the Netherlands. “There is too much in today’s scientific system that depends on the JIF. This is not a healthy situation.”

RCR, however, has its own shortcomings, Mr. Waltman said. One of the most glaring, he said, is that its complicated system for weighting networks of papers that cite other papers is field-specific, so it appears to discount the value of interdisciplinary science.

Mr. Santangelo said that complaint, which had been lodged against a previous version of RCR, has since been largely resolved. “We demonstrate that the values generated by this method strongly correlate with the opinions of subject-matter experts in biomedical research, and suggest that the same approach should be generally applicable to articles published in all areas of science,” he and his team of NIH analysts wrote on Tuesday in a report published by the open-access journal PLOS Biology.


Mr. Waltman said he does believe that it’s necessary for universities and funders to use statistics to measure the value of published science. He has developed his own standard, known as source normalized impact per paper, or SNIP. But that is also a “quite complex metric,” Mr. Waltman said, and neither SNIP nor any other measurement should be used as the sole basis for gauging the scientific value of research.

As a publisher, PLOS shares that point of view. PLOS recognizes the need for “a new, robust quantitative metric that focuses on the article” rather than the journal in which it appears, said David Knutson, a spokesman for PLOS. And PLOS agrees with Mr. Waltman on the need for even more alternative methods of assessment, Mr. Knutson said. “Metrics should support, not replace, expert judgment,” he said.

Paul Basken covers university research and its intersection with government policy. He can be found on Twitter @pbasken, or reached by email at paul.basken@chronicle.com.


