Graduate-level programs were once relatively immune to pressure to define and measure “learning outcomes” for their students. But for good or ill, the student-learning-assessment movement has begun to migrate from the undergraduate world into master’s and doctoral programs. (At some institutions, there is even talk of defining a set of “foundational outcomes” for all graduate students—that is, a set of learning goals that would be analogous to general-education goals for undergraduates.)
On Wednesday morning, as the annual meeting of the Council of Graduate Schools got under way in Washington, three graduate deans led a workshop on assessing graduate students’ learning and using such assessments to improve programs.
Formal assessment for improvement, they said, is more useful and less painful than many faculty members believe. (And in any case, accreditors are insisting on it.)
The three deans sat down for an interview after the workshop.
Q. In doctoral programs with intense mentor-apprentice relationships, the idea of establishing rubrics and other lists of learning outcomes might seem off-key. If I’m a senior professor of comparative literature and I’ve supervised 30 dissertations during my career, I probably know in my bones what successful learning in my program looks like. Why should I be asked to write out point-by-point lists of the skills and learning outcomes that my students should possess?
Charles Caramello, associate provost for academic affairs and dean of the graduate school at the University of Maryland: If you write out lists of learning outcomes, you’re making the invisible visible. That’s really my answer. We’ve all internalized these standards. They’re largely invisible to us. Assessment brings them into view, and therefore gives them a history.
William R. Wiener, vice provost for research and dean of the graduate school at Marquette University, who is currently dean in residence at the Council of Graduate Schools: There’s no way to aggregate and to learn unless you’ve got some common instruments. By having common instruments, we can see patterns that we couldn’t see before.
James C. Wimbush, dean of the University Graduate School at Indiana University: Part of the story has to do with the external environment. Because of the decrease in funding for state institutions, because of political pressures from state legislators, we are forced to be much more accountable. Our boards of trustees are now looking for more accountability. They don’t necessarily say, “We want to make sure that you’re doing assessments of graduate programs.” But they’re questioning, Do we have too many graduate programs? We have to do a better job of being accountable for how we use our resources from the state and elsewhere. Assessment is one of the ways of doing that.
William Wiener: And not only at public institutions. My Board of Directors asks the same questions.
Charles Caramello: Faculty care about standards. They really care about excellence. They really care about evaluation, and they really care about peer review. To the extent that you can say, Look, assessment is a form of all of these things—it’s not alien to what you do every day. It’s another name for it, and a slightly different way of doing it. And the great advantage of it is that it gives you a way to aggregate information, and therefore to see patterns.
Q. What about graduate programs that are now being asked to do student-learning assessments for two accrediting bodies? An engineering program, for example, might now be expected to do student-learning evaluation both for the specialized engineering accreditor and for its university’s regional accreditor.
James Wimbush: Yes, that happens. The school of education, the school of business—they have very rigid accreditation standards from their associations. They tend to focus on meeting those particular criteria.
Charles Caramello: But those programs tend to come on board most quickly with student-learning assessment because for them this is familiar. One important thing that we try to do at Maryland is not ask these programs to do the same thing twice. If they’re already using an assessment model for their specialized accreditor, we don’t want to tell them that they have to create a second model. We’ll find a way to work with them.
William Wiener: But sometimes there are elements that are missing. The outside accreditors are concerned with their own standards. They’re not always so concerned with the mission of the university.
Q. Once a university has developed learning goals for its graduate programs and has been through several cycles of assessment, how public do you want to make that information?
William Wiener: I think it should be public. I think it will give our public confidence in what we’re doing. I think the universities are afraid. But I think that will change. Where a program is low, so be it. What’s important is, Do they improve over time? And if you don’t start with something, you’re not going to go to the next level.
Charles Caramello: Programs are wary, and with some reason. You can’t create a situation where a program is shamed. Publicly, the message to put forward is, This is what we’ve discovered, and this is what we’re doing to improve. That’s useful to students, it’s useful to prospective students, it’s useful to the faculty in the program, it’s useful to the dean and provost. And that’s a real form of accountability. That’s not numbers. It’s “We found this problem. We’re going to fix this problem.” And then you can look two or three or five years later and see. Has the problem been fixed?