I have been following with considerable interest the recent controversies at the University of Texas at Austin and Texas A&M over the collection of data purportedly measuring faculty productivity. That’s because I was, for almost 20 years, director of the National Study of Instructional Costs and Productivity at the University of Delaware, which has become the tool of choice for collecting detailed data on faculty teaching loads, instructional costs, and externally financed scholarly activity. The Delaware Study, as it is known, has included more than 600 four-year institutions since its inception, and it provides participating institutions with information on which categories of faculty are teaching which levels of students, and at what cost.
I mention these specifics because attention to detail, along with patience, is crucial to the successful collection of data for the purpose of measuring faculty productivity. From the very start, institutions that participated in the Delaware Study were required to accept the caveat that study data should be viewed over time, not just in a single year. (Idiosyncratic data resulting from sabbaticals and other paid leaves can affect both teaching loads and instructional costs in any given year.) Even then, those data should not be used to reward or penalize academic disciplines. Rather, they should be used as a tool of inquiry for framing discussions about why an institution’s results are similar to or different from the national benchmark data.
After all, faculty do a great deal more than teach, and faculty productivity embraces a great deal more than can be captured in a “student credit-hours taught per faculty member” ratio. Colleges must consider the qualitative dimensions of out-of-classroom faculty activity, particularly in the fine arts, social sciences, and humanities, where there are few data on external support to provide context for teaching loads and instructional costs.
Because of this need for broader contextual information, the University of Delaware’s study collects data on a broad range of out-of-classroom faculty work that can affect both the amount of teaching done and the associated instructional costs, including the number of undergraduate and graduate-student advisees, the number of thesis and dissertation committees served on or chaired, the number of course curricula designed or redesigned, and so on. It monitors, among other things, the number of manuscripts submitted and published; juried shows, commissioned performances, and invited presentations or readings; grant proposals prepared and financed; and patents applied for and awarded. Additionally, it collects information on service to the institution, the profession, and the community. As with the teaching-load and cost portion of the study, this information provides a context for more fully understanding teaching loads and the associated direct expenses within each discipline.
Indeed, analysis at the disciplinary level is an important feature of any study that sets out to measure faculty productivity. In analyzing multiple years of Delaware Study data for a major study for the National Center for Education Statistics, we started with the operating hypothesis that an institution’s Carnegie classification would be a major factor: Research universities would teach less and cost more than doctoral universities, which in turn would teach less and cost more than institutions that do not confer doctoral degrees, with baccalaureate institutions costing the least and teaching the most. That hypothesis did not hold up. In fact, more than 80 percent of the variation in total instructional costs at four-year institutions is explained by the mix of disciplines that makes up an institution’s curriculum.
An institution like the University of Delaware, for example, which is heavily invested in the hard sciences and engineering, will have a higher direct instructional expense per student than a comparable institution that is more heavily invested in the humanities and social sciences, disciplines that are less equipment-intensive and lend themselves to larger class sizes.
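To make the arithmetic concrete, consider a deliberately hypothetical sketch (the figures below are illustrative, not drawn from the Delaware Study). If each discipline contributes a share \(w_i\) of an institution’s credit hours at a direct cost of \(c_i\) per credit hour, the institution’s average direct cost per credit hour is simply the weighted sum

\[ \bar{c} = \sum_i w_i \, c_i . \]

A curriculum split 60/40 toward engineering and the sciences at, say, 300 dollars per credit hour, versus 150 dollars in the humanities, averages 0.6 × 300 + 0.4 × 150 = 240 dollars per credit hour; reverse the split and the average falls to 210 dollars. Nothing in that gap reflects how hard the faculty are working; it is the disciplinary mix alone.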
In Texas, though, nearly all of the data elements being collected are what we call input measures: faculty salaries, the number of courses taught, course enrollments, student credit-hour production, and average grades awarded. This limited viewpoint can tempt universities to increase productivity by ramping up the number of courses and credit hours taught and reducing the number of faculty doing the teaching. But are we prepared to give up all of the other dimensions of faculty activity? If faculty don’t advise students, are we prepared to hire professional staff in their place? If the only research that counts is externally financed, are we prepared to give up the contributions of faculty to studio art, music, theater, and literature? I suspect not, even in Texas.
As educators, we should be far less focused on how many courses and credit hours faculty teach, and far more concerned with measuring, through a variety of instruments rather than a single standardized test, how much students are learning. The good news is that hundreds of colleges throughout the country are increasingly embracing such measurement as an essential component of faculty productivity. I can only hope that our colleagues in Texas take note. Yes, measuring productivity is important, but let’s do it right.