An effort to measure academic units passes the test at Oklahoma State U.
By Michael Anft
September 10, 2017
When Bret Danilowicz was hired five years ago as dean of Oklahoma State University’s College of Arts and Sciences, he decided to learn how well each of its 24 departments was doing. He created a measure of the effectiveness of programs as varied as fine arts and physics, using criteria including how much in philanthropic gifts they attract and how well students learn.
He knew he’d run into resistance. Similar efforts at other universities, usually spurred by tightening budgets or tax-conscious legislators, and aided by a flood of newly available data on student outcomes and faculty-research productivity, have traditionally been received with angst by department chairs and faculty members. Department-level metrics, often used to compare each unit against the others, can loom like a threat, eliciting anxiety and questions.
So, when Mr. Danilowicz got the program off the ground, in 2013-14, Oklahoma State faculty members were predictably lukewarm. A music professor wondered how his department, with its emphasis on costly, one-on-one instrument lessons and relative lack of research dollars, would be compared with hard-science departments flush with grant money. (Six departments at the college bring in 85 percent of the grants and gifts.) The political-science chair asked about the time and energy a metrics process might take from other departmental work.
“We were concerned by words like ‘quantifying,’ " says Jennifer Borland, an associate professor of art history. “Our department was disinclined to think it could fit neatly into any kind of fair measurement process.”
Now, with the college entering year five of its annual departmental evaluation, Ms. Borland and others say that Mr. Danilowicz’s plan — one that is unique at Oklahoma State and rare nationwide — has measured up. It has forced units to take a hard look at what they do, helped them sharpen their goals, and encouraged them to seek out precise data that can justify how they operate.
And because the evaluation focuses on helping each department improve and align with the university’s strategic goals, it has largely avoided the apples-to-oranges comparisons that can worry heads of departments that don’t generate lots of money.
After some initial nail-biting, Oklahoma State’s professors have been won over.
“When Bret arrived, he generated a clearly stated rubric of what constituted successes versus failures,” says William Picking, formerly chair of the department of microbiology and molecular genetics. (He is now a professor of pharmaceutical chemistry at the University of Kansas.)
Before Mr. Danilowicz’s tenure, there was no system in place at Oklahoma State for rewarding departmental improvement with more resources, or for giving department chairs guidance on how to improve, Mr. Picking says. The plan changed that: “It gave me confirmation that my goals were on the right track.”
After Mr. Danilowicz joined the university and scanned the landscape for ideas, he was dissatisfied with the typical approach to applying a unit-by-unit yardstick.
Universities typically measure the productivity of their departments every five years or so, often as part of the accreditation process. In many cases, academic units report to provosts or presidents regularly during the interim and include data on such measures as enrollment, faculty workload, student retention, and grant dollars.
Departments deemed to be operating at a high level are often rewarded with more resources, which they can use to finance improvements in facilities, purchases of equipment, new faculty positions, or additional teaching assistantships.
Mr. Danilowicz thought there was a better way, starting with getting lower-level administrators, department heads, and faculty members to participate in the assessment process every year.
“To me, the president and provost are too high a level for this,” he says. “If the goal is to give your departments a chance to take a good look at themselves, and with an eye toward improvement, you need to get faculty and chairs more involved in the process.”
A marine biologist by training, Mr. Danilowicz had served as dean of science and technology at Georgia Southern University. Now he wanted to understand the wider range of disciplines he was sizing up at Oklahoma State. It was important, he thought, to develop qualitative measures, not just numbers that could be mashed up.
“I’m a scientist, so grants and publications were very important to me,” he says. But as he talked with chairs and professors outside the sciences, he saw that each discipline brought its own yardstick.
“I came to learn that people in humanities want to review the quality of their scholarship. And arts people value creativity, which is really hard to measure,” he says. “The more I learned, the more it seemed natural to have the departments develop their own criteria and do their own assessments, and for my office to give them my thoughts on what they come up with.”
The process needed some kind of standardization, however. Even if an artist can’t point to research grants, or a scientist can’t add up exhibits or performances, each should have comparable ways to quantify their output. But that isn’t easy, Mr. Danilowicz says. It took him two years to develop a set of seven measures that can be understood and used by everyone from experimental chemists to theater instructors.
The measures are: ethnic and gender diversity among faculty members and students; faculty workload; the size and vitality of the department’s programs; philanthropy; research; student learning; and student retention. Department heads are asked to consider each of those categories by looking at their data, expectations, format, and objectives.
The idea, Mr. Danilowicz says, is to create a matrix that yields enough information to determine how a department is doing, how it can get better, and how administrators can help it get there.
By gathering information and working to overcome shortcomings, departments can avoid some of the pain that comes during times of belt-tightening, say those who have taken part in the process.
“These data enable departments to anticipate potential issues and be very proactive about addressing them,” says Laura Belmonte, who heads Oklahoma State’s history department.
During a time when many universities, particularly public ones, are cash-strapped, such knowledge can help departments head off cuts that can do damage to faculty staffing and programs.
Two years ago, Oklahoma State saw its annual infusion of state cash reduced by about $2 million, to $120 million. Cuts had to be made. “The departments that had shown measurable improvement were in a position to make a strong case,” Mr. Danilowicz says. They sustained lesser cuts than units that hadn’t shown such improvement.
Despite the continuing cash crunch, a department’s ability to raise money from outside sources hasn’t been part of the cost-cutting equation.
“We do look across departments comparatively to see how they are coming with their plans for improvement, but we don’t compare them when it comes to generating revenue,” he says.
The unruffled experience at Oklahoma State’s College of Arts and Sciences is unusual, observers say. Departmental-metrics plans still encounter resistance. Some deans say that there are no “unified metrics” that can be fairly applied across disciplines.
“There’s increasing pressure for us to measure our productivity and quality, but we’re a diverse school that features everything from clinicians who see patients to screenwriters to radio journalists,” says Jay Bernhardt, dean of the Moody College of Communications at the University of Texas at Austin. “Attaching weights to everything becomes a subjective exercise, not an objective one.”
Like several other deans and faculty members interviewed for this story, Mr. Bernhardt had a horror story of a metrics scheme gone wrong. At one university where he worked, he saw a dean roll out a plan that included an Excel spreadsheet that was 30 pages long. After filling it out, department heads then saw their information reduced to a score that was used to compare their performance against those of other departments.
“There was a lot of resentment,” he recalls. “No faculty member I know likes filling out lengthy spreadsheets.”
Mr. Bernhardt has not put together a measurement process at Texas, where the university has so far steered clear of metrics-based comparisons. Instead, he relies on outside reviews of departments by accreditation groups and peer institutions. “It’s more fair than developing some kind of overarching rubric,” he says.
Leaders at many smaller colleges also blanch at the idea of using a far-ranging set of metrics to size up departments. They cite limited financial resources and a preference for measuring student outcomes across the college.
“We do pay attention to faculty workloads at the departmental level, but we have to look at the wider perspective of what it means to deliver a liberal-arts education,” says Michael Latham, dean of academic affairs at Grinnell College. “If we were to look at quantitative measures alone, we might be put in a place where we’d close a department down.”
Despite such concerns, more colleges are putting metrics plans into action.
“Universities are learning that departments are driving student and faculty performance,” says David Attis, managing director of strategic research at EAB, a research company that serves as a consultant to 1,200 colleges and universities. About six dozen of them have asked the company for help in devising yardsticks for academic departments. Many institutions are adding annual departmental reviews to their five-year accreditation evaluations, Mr. Attis says.
“Administrators need to make more hard decisions these days, and they need the data to back those decisions up,” he says. “These programs provide it.”
His company recommends that colleges avoid ranking departments against one another, to avoid unfairness. It suggests that colleges “empower department chairs by giving them data annually so they can make decisions that strengthen their departments.”
EAB cites Mr. Danilowicz’s plan at Oklahoma State as a model. Many of the reports assembled by arts-and-sciences chairs and faculty members there have resulted in concrete changes.
In the music department, where retention of freshmen has been a problem, the dean’s office hired a new academic adviser to keep more young students in the program. One report noted that a larger-than-normal number of students in the Music Theory I course withdraw from it. The department was given money to offer a pre-theory fundamentals course so that more of those students would stay enrolled.
“As soon as departments see these measurement programs as an opportunity to work with the dean’s office, they realize they can really help them,” says Jeffrey Loeffert, an associate professor of music.
Some educators at Oklahoma State note that the transition to an intense annual review hasn’t been seamless. Guidance on how to complete the reports, which typically add up to little more than 10 pages per department each year, is sometimes lacking. For example, dual majors aren’t counted as majors in both departments, only in the one that students list first — potentially shortchanging departments like history and other humanities units, which dual majors often list second, when resources are allocated.
And feedback from Mr. Danilowicz — about two pages of suggestions, on average — hasn’t been as timely or expansive as some department chairs and professors would like. Some faculty members say it might be better if they could interact with the dean more often.
“I don’t know how I’d schedule all that,” Mr. Danilowicz responds. “Our process on this is still evolving.”
Still, department chairs generally say they have benefited from the plan.
“The metrics have engaged the faculty in a way that it needed to be engaged,” says Jeanette Mendez, chair of political science. “We had just been doing our thing, and then we were encouraged to set goals so we could develop a sturdier structure. It’s given us the tools to get there.”