I had a premonition of a metric future the other day as I listened to Alice Mitchell, a professor at Wannabe University, give an account of why she deserved tenure. (Her name is a pseudonym; Wannabe University is a flagship university where I conducted participant observation for more than six years.) Over the years, I’ve listened to innumerable assistant professors assess their chances of getting tenure. Usually they have worried, Did I publish enough in the right places?
In the 1970s and 1980s, my friends and I also talked about how much we had written and where it had appeared, but we discussed why our work was important, too. Alice didn’t tell me about the topics of her research; instead she listed the number of articles she had written, where they had been submitted and accepted, the reputation of the journals, the data sets she was constructing, and how many articles she could milk from each data set. The only number she forgot to supply was the impact ranking of each journal to which she had submitted an article. An impact ranking is, of course, an estimate of the influence that articles in the journal might be expected to have on a field (as measured by citations) and is not to be confused with the NFL power rankings published weekly during the football season.
Alice’s analysis reminded me that colleges and universities have transformed themselves from participants in an audit culture to accomplices in an accountability regime. The term “audit culture” refers to rituals of verification that measure whether and how institutions and individuals engage in self-policing, much as a diabetic pricks her finger to learn her blood-sugar level. Besotted with rituals that are characteristic of the corporate world, higher education has inaugurated an accountability regime—a politics of surveillance, control, and market management that disguises itself as value-neutral and scientific administration. In this emerging academic world, audits have consequences (for an individual, if you don’t pass the tenure audit, you lose your job), honor resides in being No. 1—or, for an institution, at the very least in the top 25 of whatever group has been identified as yours—and, to quote the sociologist Troy Duster about my research, “every individual and unit strives and claims to be well above average.” At Wan U., attempts to improve anything melt and puddle into a list of numbers.
Alice had mastered one requisite of the accountability regime. She had transformed herself into an auditable commodity comprising so many measurable skills. However, she had lost track of why she was being audited. Supposedly her bosses, up the bureaucratic chain, wanted to know what kind of contributions she might make over the years. Would she devote her life to answering questions that matter? Did her students find her lectures and seminars enthralling, or at least not too boring? Was she a good citizen, a team player happy to sit through interminable committee meetings dedicated to the common good? Did she realize that there is such a phenomenon as the common good—whose characteristics are captured by the metrics on research, teaching, and service encapsulated in the latest strategic plan? Those matters, Alice seemed to feel, could be captured in metrics like submissions to academic journals.
Unhappily, Alice is not the only person in higher education who has embraced commensuration, the process of attributing meaning to measurement. Annually, other job and tenure candidates list how many articles and books they have published, how many talks they have delivered (including how many to which they were invited, and by whom), how many students they have advised and taught. Now and again, senior professors, writing letters to evaluate a candidate’s suitability to get or keep a job, provide their own lists. Sometimes they, too, are so intent on constructing them that they forget to discuss a candidate’s intellectual contributions. Last year, when presenting a distinguished-research award, a top Wannabe administrator noted that the recipient had published well over 100 articles. He never said why those articles mattered.
So, too, administrators elevate student evaluations of teaching, even if they don’t know what those mean. Here’s how a Wan U. vice provost explained the importance of scores on Student Evaluation of Teaching Instruments: When making decisions about tenure, he related, “we might be looking at two people with similar research records, but one is said to be a good teacher and the other, not. And all we have are numbers about teaching. And we don’t know what the difference is between a [summary measure of] 7.3 and a 7.7 or an 8.2 and an 8.5.”
The problem is that such numbers have no meaning. They cannot indicate the quality of a student’s education. Nor can the many metrics that commonly appear in academic (strategic) plans, like student credit hours per full-time-equivalent faculty member, or the percentage of classes with more than 50 students. Those productivity measures (for they are indeed productivity measures) might as well apply to the assembly-line workers who fabricate the proverbial widget, for one cannot tell what the metrics have to do with the supposed purpose of institutions of higher education—to create and transmit knowledge. That purpose includes leading students toward a fuller life, an appreciation of the world around them, and expanded horizons.
I interpret many of the metrics in strategic plans as an intention to educate, much as buying a contract at a fitness club may be understood as an intention to exercise. However, most fitness clubs make their profit by taking money from customers who come once or twice, usually just after they have signed their contracts. (If those customers worked out regularly, the club would need to hire more staff members and buy more machines; there’s no profit in that.) Most strategic plans proclaim their aim to increase the quality of education as described in a mission statement. But, like the fitness club’s expensive cardio machines, a significant increase in faculty research, in the quality of student experiences (including learning), in the institution’s service to its state, or in its standing among its peers may cost more than a university can afford to invest or would even dream of paying.
The very term “increase” implies measurement, as in such goals as an increase in student credit hours per full-time-equivalent faculty member, four- and six-year graduation rates, the number of master’s and doctoral degrees per faculty member, the number of postdoctoral students per faculty member, the number of publications per faculty member, and, of course, “external research expenditures” ($) per faculty member—outside funds that each researcher brings in and spends. Such metrics are a speedup of the academic assembly line, not an intensification or improvement of student learning. Indeed, sometimes a boost in some measures, like an increase in the number of first-year students participating in “living and learning communities,” may even detract from what students learn. (Wan U.'s pre-pharmacy living-and-learning community is so competitive that students keep track of one another’s grades more than they help one another study. Last year one student turned off her roommate’s alarm clock so that she would miss an exam and thus no longer compete for admission to the School of Pharmacy.)
Even metrics intended to indicate what students may have learned seem to have more to do with controlling faculty members than with gauging education. Take student-outcomes assessments, meant to be evaluations of whether courses have achieved their goals. They search for fault where earlier researchers would not have dreamed of looking. When parents in the 1950s asked why Johnny couldn’t read, teachers may have responded that it was Johnny’s fault; they had prepared detailed lesson plans. Today student-outcomes assessment does not even try to discover whether Johnny attended class; instead it produces metrics about outcomes without considering Johnny’s input.
Here’s how one Wan U. professor explained it to me: “It’s like the students are being processed the way you process hot dogs. You take raw material, and you put it on an assembly line. You check for defective hot dogs. Almost all the hot dogs are good. When one is defective, you ask how to change the process. You don’t try to figure out what went wrong with the raw materials you were assembling. Not with this kind of continuous quality control.” That kind of evaluation does not even pretend to ask questions about a student’s preparation, class attendance, or study habits—in short, what the student is like as a student. The analogy to the processing of hot dogs (or TVs or cars) also reminds us that administrators are assuming control of the curriculum, for managers set the standards for the assembly line. They decide how work is to be done. Student-outcomes assessment announces the existence of an audit culture that has run amok and become an accountability regime.
The emergence of an accountability regime generates a list of questions: Will nothing halt the corporate university’s insistence on subordinating knowledge to money? Will changes in higher education resemble accounting processes now characteristic of health care and legal practice? What do such alterations in colleges and universities, key institutions that help establish the underpinnings of our culture, tell us about how contemporary society is changing?
For those of us working in higher education, the key question may be quite practical: Why aren’t more professors resisting, as administrative attempts to cope with the Great Recession make the growing strength of the accountability regime ever clearer?