As promised, this blog will expire with the calendar year (cue violins).
In this last post, I’ve tried to distill some fundamental arguments about assessment and accountability in higher education. I’ve borrowed liberally from comments that readers have left here and elsewhere on The Chronicle’s site. Many thanks to all of you for reading and arguing.
Of course there are more than two sides to these debates. In that respect, what follows is pathetically reductive. But I’ve tried not to put my thumb on the scale on behalf of either of these characters. I’ve tried to convey the strongest cases on each side of an admittedly crudely drawn line. (If I’ve failed to do that, you should of course call me out in the comments.)
The scene: Friday, 7:45 p.m. A bar on the outskirts of a moderately selective public university. The décor and the jukebox appeal to disillusioned 36-year-old faculty members and a few graduate students. The fraternities leave this place alone.
Accountability Skeptic: Did you see that memo from the dean today? He’s hired some consultant to teach us how to “design learning outcomes” for our students. I can’t imagine a bigger waste of time and money. And I don’t think the dean even believes in this stuff himself. I think he’s just trying to keep the accreditors off his back.
Accountability Hawk: Don’t be so cynical. Tuition and fees here have gone up by more than 50 percent since 2000. Students are taking on miserable levels of debt to be in our classrooms. They deserve to have faculty members who are focused on their learning—and that means that we need some kind of common understandings in our departments about the knowledge and skills students are supposed to be picking up.
Skeptic: Listen. I do focus on my students. I assess their learning every week. It’s called grading. I get the feeling that the dean wants a list of “learning outcomes” that are so easy or so nebulous that no student will really fail. It’s a joke.
Hawk: That might be true in some places. But some departments have developed very specific, very concrete rubrics that students sure as hell can fail. And when you talk about grades—I know that you take your grading seriously. But you and I both know that grade inflation is a serious problem on this campus. No one fully trusts grades anymore—not students, not accreditors, not employers.
Skeptic: Those are discussions that we should have inside our own departments. If we want to tackle grade inflation, we can do that by ourselves as a faculty. I don’t see what an outside consultant is going to bring to the table other than a lot of jargon. If I’m going to be forced to listen to a consultant lecture about “readiness assurance processes,” I’m going to need a stiff drink before I walk in the room. All of this is just going to add up to a lot of tedious, circular conversations about whether we’re ready to do assessment.
Hawk: What makes you think that we’re going to tackle grade inflation as a faculty? The problem has been growing for decades, and I haven’t seen any faculty uprising to fix it. You know what the incentives are like here. Untenured instructors are paranoid about their students’ course-evaluation scores, so they pump up their grades to keep the students happy. And people on the tenure track feel more pressure to publish research than to excel at teaching. If we start to talk seriously about learning outcomes, that’s our best chance to turn those incentives around. I’m glad the dean is finally paying attention to this.
Skeptic: Don’t you see how this all could backfire? There are a lot of reasons why universities sometimes promote research at the expense of teaching. One of those reasons is that department chairs and deans find research simple to quantify. When someone comes up for tenure, you can say, Ah, she has five publications that have been cited 16 times in all. It’s very simple to compare across faculty members. Never mind that there are serious problems with interpreting research-productivity numbers. What I’m afraid of is that this learning-assessment industry is going to lead to some new kind of easily quantified but essentially bogus measure of faculty members’ performance as teachers.
Hawk: I think you’re letting the perfect be the enemy of the good here. Everyone knows that there are problems with counting publications and citations. But would you want to have a world with no information at all about how productive a faculty member is? It’s the same with measuring student learning outcomes. The tools we’re building will be imperfect, and everyone will know that, but it’s better than flying blind.
Skeptic: Yes, but, again—these are conversations we can have inside our departments and inside our disciplinary associations. I don’t trust the judgment of the “vice provost for assessment” floating around in the upper echelon of our administration. I don’t trust the assessment bureaucrats who work for our regional accreditor. And I don’t trust state or federal regulators. In the grand scheme of things, our university is basically fine. Our six-year graduation rate is 70 percent. Every hour that accreditors and regulators spend hassling us about “learning outcomes” is an hour when they’re not going after the real malefactors: diploma mills and colleges with graduation rates below 20 percent. And the same thing applies at the campus level: Every hour that our deans and provosts spend hassling the faculty about learning assessments is an hour when they’re not going after the handful of instructors who are truly incompetent and irresponsible.
Hawk: Do you really think our campus is basically fine? Really? Hundreds of students drop out here every year, and they mostly walk away with huge debts. A lot of others take seven years to graduate when they really ought to be able to finish in four. There are lots of reasons to believe that colleges like ours could have stronger graduation rates and could do better at making sure students learn actual skills while they’re here. For a start, we could make our credit-transfer policies simpler and more transparent. People in the accreditation agencies are more sophisticated about measuring student learning than you give them credit for. They understand how complicated all this is.
Skeptic: I get nervous these days when I hear people talk about how sophisticated they are at dealing with complicated academic data. Remember the National Research Council’s ratings of doctoral programs? Their statistical model was sophisticated, all right—so sophisticated that almost no one could understand the ratings. And the way they tallied research productivity turned out to be a serious problem for disciplines like political science and sociology. Did you see that letter to the TLS from Krishan Kumar? Or look at the Collegiate Learning Assessment. Colleges that use that test receive thorough, sophisticated reports that explain how their students compare to similar students at other colleges. But it seems like those sophisticated reports might be a little too sophisticated for some purposes. When colleges disclose their CLA scores to the Voluntary System of Accountability, the colleges often plug in the wrong language from their CLA reports—declaring that their CLA score gains are “below expected” when they’re actually “above expected,” and vice versa. That means that prospective students who visit the Voluntary System’s Web site are getting bad information.
Hawk: But that’s all just nitpicking, isn’t it? These systems aren’t perfect, but they’ll get better over time. The basic thing that we have to do is to disrupt the “non-aggression pact” between students and faculty members. Instructors need to press students to learn, and students need to demand serious attention from their instructors. I don’t think that’s going to happen without outside pressure. You can daydream about “conversations inside our department,” but everything in my experience tells me that those conversations just aren’t going to happen on their own. If we need pressure from a dean hassling us to define our learning outcomes, then so be it. I think this is actually an exciting time to be in the academy. Every week I read about some interesting new way to assess student performance.
Skeptic: Me, too.