OK, I admit it: I like assessment. I like it because it encourages faculty members to think more carefully about what they do, how they do it, and why they do it that way. I like it because it helps raise questions about how our teaching strategies affect learning outcomes. And I like it because in the process, we discover more about how our teaching fits in with programs and curricula beyond our own courses. Good-quality assessment simply asks about our goals, our instructional procedures, and the link between both of those and learning.
However, my experiences as an external reviewer, workshop leader, and member of many campus program-review and program-assessment committees have made it clear to me that most academics resist assessment in general and on principle. Some professors dislike the scrutiny. Others feel that assessment reflects corporate encroachment and a threat to academic freedom. Still others fear a homogenization of the educational experience.
True, many campus-level assessment efforts are flawed—often because they don't engage faculty members and don't carefully examine faculty work and its connection to learning. In my experience, many of the letters that administrators write to departments after a program review are brief, lack a nuanced understanding of the department under review, and inadequately address curriculum quality and coherence. Many departments receive little institutional support for surveying students, and all too often anecdotal evidence is treated as fact.
So some skepticism on the part of the faculty is healthy. It can, for example, prevent institutions from moving too quickly on a new strategy. Indeed, many a professor has lived through an assessment venture—often plopped onto campus with little background work—that was initiated by an administrator who left soon thereafter. The result? Heightened faculty cynicism and reduced commitment to future assessment efforts.
But too much cynicism is unwarranted. Most administrators I have met are hungry for faculty engagement and indeed fear that too few faculty members are interested. Besides, given the often short tenure of many administrators, the faculty must play a central role in making assessment part of the campus culture. If professors lend their expertise and experience, and engage in discussions about assessment early on (both formally and informally), they can help create meaningful programs that produce useful results.
The fact is that there are several good reasons for assessment. The most important, of course, is that it can and should improve our teaching and students' learning. But too many assessment programs focus more on input than output, and they rely too heavily on student opinion. We need to look more carefully at what students can actually do, using tools such as course and student portfolios.
Executed well, assessment encourages faculty members to articulate their course and assignment goals more clearly and to develop sound rubrics. That helps them think more broadly about overarching program goals, and how to measure students' success in reaching those goals. That, in turn, typically leads to greater faculty interest in how classroom activities connect with academic performance. Asking what is important leads us to ask about what works, and both contribute to good-quality assessment, better teaching, and greater learning.
Take one common preassessment scenario: Most of the students in a given department are unable to identify key program goals. For example, many sociology students I have interviewed stumble when I ask them to link three of their program goals to anything happening in the world today. And professors who teach senior-level courses are often disappointed with the inability of many of their students to make substantive and cumulative connections across their courses.
At many institutions I've visited, assessment quickly showed that program goals noted in the handbook failed to materialize in individual courses. At one college, the stated goals emphasized the skills that students would gain for dealing with real-world problems—but interviews and reviews of student papers indicated that while the students were doing volunteer work, they were failing to use their disciplinary knowledge to analyze and critique their experiences. A review of that department's internship course, too, showed a weak focus on connecting program goals to real-world experiences.
Assessment can help. It can encourage faculty members to work together to teach and assess those learning goals. For example, many sociology programs stress the role of research methods across courses, but my interviews suggest that students generally fail to apply their knowledge of those methods in other courses. In part that happens because instructors do not reinforce such knowledge and skills. Assessing both the courses and students' knowledge will highlight such gaps and help transform students' cumulative experience by encouraging instructors to improve both individual courses and the learning gained across courses.
The entire department benefits when all courses become part of a well-thought-out whole. Professors gain classes full of prepared students, and students report their highest levels of satisfaction and learning in departments where faculty members collectively assume responsibility for the entire curriculum and its assessment. It takes a village of engaged faculty to raise successful students. That same village can provide better assessment than can one designated person, and can make better use of the results.
So what are the most essential elements of quality assessment?
One is student engagement. We usually leave students out of discussions of policies and initiatives that affect them. That's a mistake. Students can tell us how and why certain courses and programs are successful (or not), and can provide insights on how to improve teaching and assessment. In focus groups I've conducted, students reported that a clear syllabus enhanced their ratings of how well their learning was assessed.
Another is the use of effective rubrics. Rubrics help students see the organization and goals of a course more clearly, and help others assess the course and student learning more accurately. Students in program-review visits I've led have told me that they frequently hear about critical thinking, but are seldom instructed on how to do it, and even more rarely evaluated on it concretely. Students can and will provide useful, explicit feedback if we ask and then demonstrate that we use their answers to enhance their learning.
Measuring critical thinking is hard. We can't just ask students if they feel that they've learned how to think critically—almost all of them will say yes, because we constantly tell them it is important. A good model exists in an innovative program at Washington State University, which approached the problem by describing discrete elements of critical thinking that could be applied across disciplines and then giving specific examples.
Such successes can lead to conversations with colleagues both on and off campus, and help promote a collective responsibility for teaching and assessing critical thinking as well as other general-education and disciplinary goals. Over time, that can help us discover how teaching and learning strategies are connected with students' progress.
Academic responsibility must complement academic freedom. Faculty members prize their independence and autonomy, and they are quick to label any "outside" influences—especially assessment—as an infringement on their academic freedom. But that independence can sometimes be detrimental to students, because it diminishes a collective responsibility for student learning. Assessment brings into focus what students should learn in courses and programs and how successful we are as individual teachers and as faculties.
Let's not do assessment just because it is mandated. Let's not do it to make accreditation agencies happy or because everyone else is doing it. Let's do it to improve learning.