News

Scholars Seek Better Metrics for Assessing Research Productivity

By David Glenn December 16, 2009
Washington

Evaluating scholars simply by tallying their citations is “like saying Britney Spears is the most important artist who ever existed because she’s sold 50 million records,” said Johan Bollen, an associate professor of informatics and computing at Indiana University at Bloomington, as he introduced a daylong workshop on academic metrics here on Wednesday.

“That’s not how we do things in the real world,” Mr. Bollen continued. “And I don’t think we should do things like that in scholarly assessments either. … We need to find ways to take trust, prestige, and influence into account.”

But while it is easy to criticize simplistic measures of citations and “impact factors,” it is not so easy to find agreement about how to improve those metrics. Faculty members sometimes suggest that evaluators should de-emphasize numbers and instead look qualitatively at how research projects affect the public good. But a version of that proposal might be put into practice in Britain, and the idea is now causing anger and anxiety among scholars there.

At Wednesday’s workshop, roughly a dozen researchers, publishers, and agency officials described a wide range of new assessment schemes, most of which are still in the formative stages.

The workshop was organized by Mr. Bollen and supported by a grant from the National Science Foundation. It was held in conjunction with a meeting of the Coalition for Networked Information, a group of libraries and universities concerned with scholarly communication.

A Measure of ‘Usage’

Mr. Bollen began the day by describing Mesur, a research project that has compiled data about hundreds of millions of “usage events”—that is, page views and downloads—from several major scholarly publishers and databases.

By using network analysis, Mr. Bollen said, he and his colleagues have been able to estimate which scholarly articles and journals are truly central to the flow of information. They have also experimented with dozens of different measures of scientific impact that might be derived from their data.

They have also mined their database to create a “map of science” that visually describes the flow of information among various disciplines.
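Mr. Bollen did not walk through Mesur’s algorithms in detail, but the general approach of ranking items by their centrality in a network built from usage events can be sketched. The graph below and the choice of PageRank as the centrality measure are illustrative assumptions, not Mesur’s actual method:

```python
# A minimal sketch of usage-based network analysis, assuming a
# hypothetical graph of reading transitions (not Mesur's real data).
import networkx as nx

# Each edge (a, b) means readers who viewed article a frequently
# viewed article b in the same session.
usage_transitions = [
    ("article_A", "article_B"),
    ("article_A", "article_C"),
    ("article_B", "article_C"),
    ("article_C", "article_A"),
    ("article_D", "article_C"),
]
graph = nx.DiGraph(usage_transitions)

# PageRank scores an article highly when readers flow to it from
# other highly scored articles, rather than by raw traffic counts.
scores = nx.pagerank(graph, alpha=0.85)
for article, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{article}: {score:.3f}")
```

The intuition matches Mr. Bollen’s complaint about raw tallies: centrality rewards where attention comes from, not just how much of it there is.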

The usage data harvested by Mesur and similar projects can be a powerful tool—but the information needs to be interpreted intelligently, said Michael J. Kurtz, an astronomer at the Harvard-Smithsonian Center for Astrophysics.

Roughly half of the online page views of research articles in astronomy are clearly generated by scholars, Mr. Kurtz said, because those readers arrive via Harvard’s Astrophysics Data System, an academic search portal. The other half arrive via Google searches, and those readers often seem to be nonscholars who landed on the articles more or less randomly.

“There is virtually no correlation between the behavior of the two groups of readers,” Mr. Kurtz said. So if researchers are going to be rewarded because 50,000 people read one of their scholarly articles, should the evaluators worry about how many of those readers seem to be scientists? Mr. Kurtz did not offer an opinion, but said that question would need to be debated as open-access databases take root.

Analyses of Journals’ Strengths

Jevin D. West, a doctoral candidate in biology at the University of Washington at Seattle, described Eigenfactor, another project that seeks to use network analysis to assess the strengths of various journals. (The project’s Web site features an array of elaborate graphics and top-25 lists.)

“There is no reason at this point for us simply to be counting citations. We have the computational resources and the conceptual advances in network science” to do much better analyses, Mr. West said.

He added that he and other biologists are interested in analyzing scholarly information flows because they might provide a model for studying other huge, complex, dynamic systems, including ecological systems and genomic databases.
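Eigenfactor’s core idea can be sketched in a few lines. The real algorithm adds normalization by article counts, a damping factor, and exclusion of journal self-citations; the matrix below is made up, and the code shows only the underlying eigenvector-centrality computation:

```python
# Simplified sketch of eigenvector-style journal ranking (toy data,
# not the full Eigenfactor algorithm).
import numpy as np

# C[i, j] = citations from journal j's articles to journal i's articles.
C = np.array([
    [0.0, 4.0, 1.0],
    [2.0, 0.0, 3.0],
    [5.0, 1.0, 0.0],
])

# Column-normalize so each column describes where a journal's
# outgoing citations land.
M = C / C.sum(axis=0)

# Power iteration: repeatedly follow citations; the vector converges
# to the leading eigenvector of M, i.e., the journals' centralities.
rank = np.full(3, 1.0 / 3.0)
for _ in range(100):
    rank = M @ rank
    rank /= rank.sum()
print(rank)  # higher values = journals more central to citation flow
```

A journal scores highly not because it is cited often but because it is cited by journals that are themselves central, which is the kind of network-science analysis Mr. West described.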

A much simpler project was described by Jorge E. Hirsch, a professor of physics at the University of California at San Diego. Four years ago, Mr. Hirsch proposed what he called an “h index” to replace traditional measures of citations and research productivity.

The idea quickly caught fire, but it has also been widely criticized, in part because of the difficulty of treating papers with multiple authors. (Last month Mr. Hirsch proposed an alternative measure, hbar, that he says would deal with that problem.)
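The index itself is simple to state: a scholar has an h index of h if h of his or her papers have each been cited at least h times. A minimal sketch of the computation:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the rank-th paper still has >= rank citations
        else:
            break
    return h

# Five papers cited 10, 8, 5, 3, and 1 times yield an h index of 3:
# three papers have at least 3 citations each, but there are not
# four papers with at least 4.
print(h_index([10, 8, 5, 3, 1]))  # -> 3
```

The multiple-author problem the critics raise is visible here: every citation counts fully for every coauthor, so the measure says nothing about individual contribution.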

“One flaw in all of these measures,” Mr. Hirsch said, “is that no bibliometric measure will do a good job picking up very novel, nonmainstream research until it becomes mainstream.” But there is probably no good way to fix that problem, he added; it is just something that scholars will have to live with.

Peter Binfield, the managing editor of PLoS ONE, a major online open-access journal, described the steps that his journal has taken to measure the impact of its work. Every article in a PLoS journal features a “metrics” tab that reveals how often the article has been viewed, how often it has been cited, how many readers have tagged the article on social-bookmarking sites, and how many blogs have discussed the article.
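PLoS’s internal data model is not described here; as a purely hypothetical sketch, the “metrics” tab amounts to bundling several per-article counts:

```python
from dataclasses import dataclass

# Hypothetical container for article-level metrics; the field names
# are illustrative, not PLoS's actual schema.
@dataclass
class ArticleMetrics:
    views: int           # page views and downloads
    citations: int       # citations from other scholarly articles
    bookmarks: int       # saves on social-bookmarking sites
    blog_mentions: int   # blog posts discussing the article

    def summary(self) -> str:
        return (f"{self.views} views, {self.citations} citations, "
                f"{self.bookmarks} bookmarks, {self.blog_mentions} blog posts")

print(ArticleMetrics(50_000, 12, 85, 4).summary())
```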

Each of those measures is imperfect, Mr. Binfield said, but there is no reason for publishers not to experiment with them.

Mr. Binfield said it is too soon to say whether those measures will be embraced. “Will the public understand and trust these numbers?” he asked. “Will the scholarly community adopt them? Will promotion-and-tenure committees start to look at them? Will people quote these figures on their CV’s?”

Those questions probably apply to all of the projects discussed here Wednesday.

About the Author
David Glenn
David Glenn joined The Chronicle of Higher Education in 2002. His work explored how faculty members are trained, encouraged, and evaluated as teachers; how college courses and curricula are developed; and the institutional incentives that sometimes discourage faculty members from investing their energy in teaching.