When David V. Rosowsky became provost of the University of Vermont, in 2013, he wanted it to begin competing more aggressively for students, and to do so nationally. “The regional-market basket model is no longer going to meet all of our needs,” he says.
Data on faculty productivity were needed not only to help strengthen programs but also to place Vermont in the context of peer institutions, he says. Colleges face intense competition for students and must use data, not just anecdotes, to show how they stand out, Mr. Rosowsky argues.
The provost didn’t want to rely on “canned third-party metrics,” he says, so he started a campuswide discussion about what data could responsibly be used to measure scholarly productivity. He asked deans and departments across the university to develop their own metrics. The effort met some resistance, but not the acrimony and distrust often created when administrators try to quantify creativity.
“I was cognizant that ‘metrics’ is a charged word,” Mr. Rosowsky says. “I was very careful to say they are not one-size-fits-all, and some of these should be discipline-specific.”
Allowing the academic community to take over the process was “absolutely crucial” to its success, says Paul Deslandes, chair of the history department. “People in the humanities always resist these efforts because they see it as an imposition of a model from another area of scholarship, namely the sciences. But now that we’ve come up with a document that very much pays attention to what people in the humanities actually do, people are more comfortable with it.”
The history department, for example, decided to measure faculty productivity with metrics that include peer-reviewed journal articles, books, and research money, but also measurements more specific to the discipline: museum exhibitions, documentary-film advising, and digital archiving.
Some colleges, including the business school, created a tiered ranking of journals — a faculty member would receive eight points for being published in a top-ranked journal, down to one point for a “fourth-tier” journal. Accruing more points would mean reduced course loads. Earn 24 points — three publications in a top-tier journal — and see your teaching expectation for the year drop from five to three courses. Earn fewer than six points, however, and you might get a sixth course to teach.
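The arithmetic of that system can be sketched in a few lines of code. In the Python sketch below, only the top-tier value (eight points), the fourth-tier value (one point), and the 24-point and six-point thresholds come from the article; the second- and third-tier point values, and the assumption that loads between the two thresholds stay at the baseline, are hypothetical.

```python
# Illustrative model of the business school's tiered publication points.
# Tier 1 (8 points) and tier 4 (1 point) are from the article; tiers 2 and 3
# are placeholder values assumed for the sketch.
TIER_POINTS = {1: 8, 2: 5, 3: 2, 4: 1}

def annual_points(publication_tiers):
    """Sum the points for a year's publications, given their journal tiers."""
    return sum(TIER_POINTS[tier] for tier in publication_tiers)

def teaching_load(points, baseline=5):
    """Map a year's points to a course load, using the article's baseline of five."""
    if points >= 24:          # e.g. three top-tier publications
        return baseline - 2   # teaching expectation drops to three courses
    if points < 6:            # below the floor, an extra course may be added
        return baseline + 1   # six courses
    return baseline           # assumed: in between, the load stays at five

print(annual_points([1, 1, 1]), teaching_load(24))  # 24 points -> 3 courses
print(annual_points([4]), teaching_load(1))         # 1 point   -> 6 courses
```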
Other programs at Vermont counted the number of Ph.D. recipients produced or graduate students’ success on the job market. The College of Engineering and Mathematical Sciences, which was already using the number of Google Scholar citations in its annual evaluations, planned to consider, among other metrics, coverage of faculty research in the popular press when determining its teaching loads.
While the university subscribes to Academic Analytics, a company that benchmarks faculty-productivity metrics, departments aren’t required to use it, Mr. Rosowsky says. “Some departments use it because it’s robust and close to being complete. Others will find it’s neither robust nor complete in their disciplines and will look elsewhere.”
Mark Usher, chair of the classics department at Vermont, says trying to quantify creativity isn’t wise. He gives credit to Mr. Rosowsky, however, for being transparent and involving faculty members in the process. The communication, he says, “kept the natives from being too restless here.”
An absence of restlessness doesn’t necessarily result in enthusiasm, however. Will the metrics be used in a meaningful way that will allow programs to improve? Mr. Rosowsky hopes so, but for an answer, he says, check back in a few years.