Outcomes-assessment practices in higher education are grotesque, unintentional parodies of both social science and “accountability.” No matter how much they purport to be about “standards” or student “needs,” they are in fact scams run by bloodless bureaucrats who, steeped in jargon like “mapping learning goals” and “closing the loop,” do not understand the holistic nature of a good college education. For all the highfalutin pronouncements accompanying the current May Day parade of outcomes assessment, in the end they boil down to a wholesale abandonment of the very idea of higher education. Whatever their purpose, outcomes-assessment practices force-march professors to a Maoist countryside where they are made to dig onions until they are exhausted, and then compelled to spend the rest of their waking hours confessing how much they’ve learned by digging onions. The mentor-protégé model of a college education is gone. We now confront the robot model, in which knowledge is reduced to what Nietzsche called “knowledge stones” — bits of information that administrators can count and students can digest without thinking.
While what’s currently practiced as outcomes assessment may have a place in the fields of mathematics and the hard sciences (emphasis on the word may), it’s a destructive blunderbuss when applied to the arts and humanities. In trying to weed out poor teaching and in insisting on “student centered” learning, it uproots the best teaching and flattens all of it on a Procrustean bed of lesson plans and questionnaires. To be sure, in college studio-art programs — my bailiwick — outcomes assessment will inevitably result in superficial competence, and students will be temporarily happier because art will suddenly be “clarified” for them. But the price will be terrible: lots of Thomas Kinkade and Frank Frazetta wannabes, but no aspiring Jasper Johnses or Helen Frankenthalers. (Think I’m exaggerating? At a college-art “foundations” conference I recently attended, a representative of the Educational Testing Service explained how its crew of nobodies judges, for college advanced placement, more than 10,000 high-school portfolios every year. Take it from me, the stuff that gets the top mark on ETS’s scale of 1 to 6 is invariably the most hackneyed, clichéd, and uptight copycat art in the pool. I say this, by the way, as an art professor who believes that every art student should know how to draw in proportion from direct observation and should learn linear perspective; who has her painting students read such essays as David Hume’s 1757 “Of the Standard of Taste”; and who often gives a written midterm exam in painting.)
But we in higher education — especially those of us who teach fine arts, drama, dance, literature, history, religion, and philosophy — have, of course, brought this plague of pedagogical bean counters upon ourselves. We’ve spent the past half-century merrily “deconstructing” our subjects and declaring that the idea of a knowable core of what we teach is null and void. Banging the drum for relativism, celebrating the allegedly porous border between truth and fiction, and declaring that the “meta-narrative” (by which we ourselves learned what we know) is oppressive to “the Other,” we’ve pretzeled ourselves into the absurd stance that there’s actually no knowledge out there that’s dependable for anything — none, at any rate, that can withstand our “unpacking” of it. Lacking conviction, we relieved ourselves of making judgments, heating up the grades that used to rate our students’ performances to the point where, in many colleges and universities, A’s are now handed out as if they were candy. And even as we coyly admitted that we had no wisdom to pass on to the young, we nevertheless insisted that our undergraduate students spend at least four years getting our quasi-nihilist ideas into their heads.
The real wonder, actually, isn’t that outcomes assessment has marched in formation across Edukationplatz from the No Child Left Behind Act to arrive at our halls of ivy, but rather that it’s taken so long to get here. With billions of dollars of government money being poured into higher education, why shouldn’t taxpayers — and students — balk at bankrolling what often seems like a high-end welfare program for societal malcontents who get off on lording it over students with the idea that nobody can ever really know anything for sure?
Studio-art professors, I’ll admit, have been among the most culpable. With guaranteed salaries and retirement plans unavailable to working artists who struggle to make a living off their art, we’ve spent decades repeating the self-serving mantra that “art can’t really be taught” and telling our students to go home, grab a No. 2 pencil, and, using only line, bring in a couple of new pieces of ... well, whatever, next class. Then we stroke our chins and say, mysteriously, “Yeah, this one works,” or “Sorry, this one just doesn’t work for me.”
But I’m getting ahead of myself. A few years back, the chairman of my fine-arts department importuned me to be the head of our outcomes-assessment committee, and I cheerfully agreed. Liking my university, and being a team player, I dutifully set about learning exactly what outcomes assessment means. It didn’t seem like such a bad idea in principle. It seemed to mean simply that we could no longer base our teaching on the assumption that because we are active professionals in the art world, our students would automatically learn, by some sort of osmosis, to become artists themselves. Outcomes assessment meant that we would have to figure out if our students were actually learning what we assumed they were learning, or, indeed, if they were learning anything at all. And if they weren’t, we’d have to fix the problem.
I began by reading deans’ memos on outcomes assessment and a report from a faculty member who had attended a workshop sponsored by the Middle States Commission on Higher Education (the same organization that will be sending people to our campus next year to see if we’re up to snuff for reaccreditation). I met with my university’s outcomes-assessment advisers, diligently taking notes on compliance. I went to the Internet to find out what other universities — in particular, their fine-arts departments — were doing about the matter, and leafed through as many outcomes-assessment handbooks as I could lay my hands on. I was told by my administration something that was reinforced in all of the outcomes-assessment literature — that each department’s faculty at each institution has the right to establish its own goals and determine its own methods, and that this self-government is a crucial part of the process.
Over the next four years, our committee got specific about what we wanted our students to learn. The first step was trying to discern what our graduates had learned, or thought they had learned. In the art department, we devised an outcomes-assessment survey and tucked it inside a brochure we sent to alumni. We hoped the brochure would woo a few of the more successful ones into making guest appearances in some of our classes. After all, we have several alumni who are out in the world doing things that fine-arts graduates do — teaching high school, working for photo-research companies, running their own design firms, designing handbags, owning important galleries. We interpreted the success of so many of our graduates as feedback that we were doing pretty well for a small liberal-arts department. On the other hand, we agreed that the survey responses would be important, too. They would tell us not just about our most successful graduates, but also about how we’d done our jobs with all of them. Their responses would yield suggestions on how to improve our teaching.
From three mailings over three years — with about a 10-percent response rate each time (which is actually pretty good for this sort of thing) — we learned that although the faculty itself was considered terrific, departmental improvements were in order. Graduates complained that they’d had too few opportunities to exhibit their work on the campus; that outside professionals, instead of more-tolerant faculty members, should jury the annual student show; that as students they should have had rehearsals for the kinds of presentations they’d have to make in the art world; and that they should have been pushed to be more precise in classroom critiques.
On the basis of the alumni feedback (made clearest in the part of the questionnaire where alumni could simply write a sentence or two), we made the changes that were possible. We couldn’t come up with a brand-new art building, but we could carve out a space for a student-run art gallery, as well as establish other venues on the campus for students to show their work. We brought in outside jurors who were professionals in the art world. And we pushed students to use more-precise language in their art critiques.
At the end of each academic year in the cycle, I turned in a three- or four-page report to the dean. We had made a not-bad, honest effort, I thought, which had yielded some gradual, tangible results.
Wrong. It turned out — when outside outcomes-assessment dicta started coming down (I think they originate on Mars) — that our committee had been spinning its wheels in the muck of “teacher centered” teaching instead of gaining traction on the clean, hard road of “student centered” learning. That is, we were cutting to the chase too fast. The outcomes-assessment operatives from within our university (whom I don’t blame for any of this, by the way; they want accreditation and are operating under mandates coming from outside) said we’d have to formulate a more orthodox approach than some plainspoken questions to former students by which to measure the results of our teaching.
First we’d need a mission statement, one of those noble declarations borrowed by the education establishment from corporate culture, which borrowed it from the military. Since mission statements are now ubiquitous (even the pornographic Screw magazine has one), this was easy.
Not so fast. We then needed to specify, under the mission statement, our “learning goals,” to which our “learning objectives” would be directed. Then, of course, we needed to “map our courses” to the learning goals via a chart, and finally, in order to be student centered, we needed to set up grading rubrics (charts again) whereby our students would be able to grasp — with certitude — exactly what was expected of them in order to achieve a particular grade. Which is to say that in our first efforts we had failed to atomize painting, drawing, sculpture, ceramics, photography, and design into discrete bits of empirical knowledge whose acquisition by students we could plot on a spreadsheet.
We were supposed to lay art out on a dissecting table as if it were a dead cat. We were supposed to remove the separate organs — say, tonal range, positive shapes and negative shapes, linear perspective, and spatial depth — poke them and measure them, and then stick them back into the cat and have it spring back to life. Although we weren’t actually reprimanded for not filing high-school-like daily-lesson plans complete with the teacher’s summary “closure” and the indication of a ringing bell, delivered at 10 minutes to 3 o’clock, we felt the pressure to become more like high-school teachers. The camel’s nose had clearly entered the tent.
Worse, we’d used plain English in narrative form, instead of the educationese so much more easily convertible into chart headings. We’d failed to construct the long, unreadably narrow columns adored by outcomes-assessment “experts” — is there a Ph.D. in the subject? If not, there is bound to be one soon — in which we were supposed to pile abstraction (e.g., “critical thinking”) upon abstraction (e.g., “problem solving”) in order to generate the kind of gaseous sentences so endemic to outcomes assessment. Here are two, randomly pulled off the Internet from a menu of thousands: “The goal of the learning assessment course is to enable students to make reliable and accurate assessments of learning.” (Kids, can you spell t-a-u-t-o-l-o-g-y?) And, in a sad attempt to tie self-improvement therapy to art theory and technique: “Critical thinking, imaginative problem solving, and self-reflective evaluation are key components in the development of the theoretical and technical aspects of art making.” (What does “self-reflective evaluation” mean?) Worst sin of all, we hadn’t realized that, for all the chatty assurances that each department was free to devise its own outcomes assessment, all of those columns, abstractions, and the rest were absolutely requisite. The faculty must, in short, be broken.
From conversations and correspondence with friends and peers, I’ve concluded that college faculties in the arts and humanities have fallen — mostly from exhaustion — into two camps vis-à-vis outcomes assessment. On one side sit the resigned realists, who know that outcomes assessment is bureaucratic baloney but who recognize that resistance is futile. If getting the educrats off their backs as quickly as possible so that they can return to real teaching means concocting “learning objectives” in, say, 18th-century English literature, what the hell, they’ll do it. For the record, this is where I’ve laid my sleeping bag.
The second camp, though, is downright scary. It consists of those faculty members who believe that outcomes assessment has something to offer. They tell you that the process has clarified things for them. These second-rate teachers, who never really got art in the first place, suddenly find themselves with a ready-made rubric to guide them untroubled through their classes: Think up some lifeless “learning objectives,” cobble together algorithmic lesson plans, give pat little presentations, then proctor the students while they connect the dots and fill in the blanks, and tote up the scores (“interpret the findings”). This brings them to the end of the process, where, in a kowtow to outcomes parlance, it’s time to “close the loop” and make changes so that students will be able to learn better. The cycle begins all over again at this point, in an endless loop designed, of course, to prevent faculty complacency.
In studio-art classes, that means comforting poor and mediocre students at the price of shackling the better ones. Like that of many professors, my education came out of the old pedagogical approach based on the assumption that teaching to the best lifts everyone in the class, and I have modeled my pedagogy on that principle. I have no proof, but I predict that the OA-convert teachers, in catering to the average, will edge upward in popularity with students. Outcomes assessment imitates the documentary that’s coming soon to a classroom near you — Revenge of the Mediocre.
Nevertheless, many college administrators are probably relieved. Outcomes assessment promises to spare them the chore of trying to determine — without benefit of spreadsheets, charts, learning objectives, scales of 1 to 6, and the rest — who on their faculties is any good at teaching and who is not. Now all they need do is to measure the lengths and diameters of the smooth but nutritionless cocktail sausages as they drop from the outcomes-assessment grinder.
As I think back over my own education, it’s clear to me that some of my best professors taught in indirect, even enigmatic ways, and that their ideas and influences took years to really settle in. There was Peter Viereck, at Mount Holyoke, with whom I took two semesters of Russian history. Like Thucydides, he taught history through stories and anecdotes, tucking the necessary facts into narrative crannies. Later, in graduate school at the Art Institute of Chicago, I encountered Ray Yoshida; he was as laconic as a rock, but students valued like jewels the few piercingly perceptive words he’d say about their work. Both of those professors understood the greatest secret of teaching, the one that no one, anywhere, anytime, can ever chart: Only when there exists a mutual need of the student for the teacher and the teacher for the student can any teaching or learning take place. Yet neither of those men would rank very high by orthodox outcomes-assessment reckonings. They would probably have choked to death on a phrase like “learning objectives.”
Laurie Fendrich is a professor of fine arts at Hofstra University.
The Chronicle Review, Volume 53, Issue 40, Page B6. http://chronicle.com