Faculty members are used to having to prove that their scholarship has value. But at a time when higher education is under sharp scrutiny, a growing number of professors are turning to alternative metrics, or altmetrics, to help make their case.
Such measures of impact include how often an article is tweeted, blogged about, mentioned in mainstream media, bookmarked, or downloaded or viewed from journal websites. Unlike traditional measures of scholarly influence, such as citations and an academic journal’s impact factor, altmetrics can capture the online reach of scholarly work.
Jason Priem coined the term “altmetrics” — in a tweet, of course — in 2010, when he was a doctoral student in information science at the University of North Carolina at Chapel Hill. He went on to help found Impactstory, an open-source website that gathers metrics on research products like articles, software, and data sets. Now on leave from his graduate studies, he’s known as one of the authors of a “manifesto” on altmetrics, published not long after that stage-setting tweet.
Mr. Priem recently talked with Audrey Williams June, of The Chronicle, about the growth of altmetrics, the factors driving its popularity, and the future of the field. Here is an edited version of their conversation.
Q. What signs do you see that interest in altmetrics has grown since 2010?
A. There have been hundreds of papers written about altmetrics, there are major workshops on altmetrics, there are postdocs who study altmetrics and major grants awarded to study altmetrics. The study of altmetrics has really burgeoned as a field of academic inquiry.
Publishers have really bought into the idea of altmetrics. A majority of journal articles have some altmetric component to them. You can see how many times articles have been tweeted or downloaded, or see the donut [an icon that holds a numeric score given to an article based on the quality and quantity of attention that it gets]. Publishers have responded to the interest their authors have in altmetrics by including a lot of these things.
One of the most exciting things is that administrators are excited about altmetrics. When I first proposed this idea, they weren’t thinking about it very much. Today administrators are banging down our doors asking us, “How can we use altmetrics to better assess the broader impact of our faculty?” Many of them are very interested in meeting the imperative they’re receiving from their university to take broader impact seriously.
Still, I think it’s going to be a bit of a journey before altmetrics are widely considered a robust set of indicators of impact.
Q. What are you hearing from academics about the metrics used to assess their scholarship?
A. Even though a lot of scholars are still evaluated on the impact factor of the journal that they’re published in, frustration with the impact factor has grown. People want something better, more meaningful, something more scientific to assess their impact. … A lot of them tell me, “I need Google Scholar for my citations.” They also say, “I have to have an Impactstory account for my research.” … They’re taking responsibility for measuring their own impact.
Q. Where do tenure-and-promotion committees stand when it comes to supporting the inclusion of online impact as part of a tenure bid?
A. I’d really like to see hard data on this. Without really good data, which we don’t have, it’s mostly anecdotal. I hear opinions all across the spectrum, from “My tenure committee would slaughter me if I even think of mentioning Twitter” to “I’m not so worried about what my tenure committee actually thinks about this; it represents who I am and it’s what I’m proud of, and I’m going to put it in my tenure package.”
Q. People have talked about how easy it would be to game the system when it comes to altmetrics, a concern that persists. But what concerns have largely disappeared?
A. Far fewer people are saying, “I’m afraid altmetrics will mean that buzz will become more important than quality.” When people learn a little bit more about altmetrics, they learn that they’re not meant to become the only metric. They’re a component in an ecosystem of metrics. Looking back at the manifesto, I would have liked to address people’s fears by emphasizing more strongly that there’s no reason altmetrics should be thought of as a way of throwing out citations as a tool to assess impact.
Now that we have more data, we can answer questions that were once thought to be unanswerable. What is the public impact of that research immediately after publication? What do top scientists think about it? We’ve also done quite a lot of work to build standards for reporting altmetrics.
Altmetrics are harder than traditional metrics. They’re messier because they’re so much richer. But just because something isn’t completely worked out doesn’t mean it’s not ready to use. Altmetrics right now can, in a qualitative sense, show the impact of research. They can show what people are saying about it. Are they all saying it’s terrible? Are they saying, “I’m a doctor and I’ve used this article in my clinical practice and it cured my patients”? Policy makers are looking for that kind of thing: They want numbers, but they also want stories. We can use altmetrics to find those stories.
Q. What’s down the line, say five years out, for altmetrics?
A. I think it’s quite possible that in the next five years, people won’t even be talking about altmetrics anymore. Instead they’ll talk about metrics — and that’s it. And when they say metrics, what they’ll mean is the entire suite of ways to measure impact in a responsible and multidimensional way. I would love for the “alt” in metrics to just disappear.