Happy Thursday, and welcome to Teaching, a newish newsletter from The Chronicle of Higher Education. I’m Beckie Supiano, one of the Chronicle reporters covering teaching and learning.
This week we’re going to devote much of the newsletter to something most professors love to hate: course evaluations. We’ll start with a conversation between me and my editor, Dan Berrett, on the topic before sharing a few resources you may want to check out. Then we’ll unpack what a big new report on the future of undergraduate education has to say about improving the quality of teaching. Let’s get started.
The Student Voice Matters
Before he was my editor, Dan covered the issues I’m now writing about. Fortunately for me, he saves old emails. One of the things he passed along when I took over the beat was a back-and-forth he’d had with the director of a program at the University of California at Merced in which students conduct teaching assessments. At the time Dan was working on a story about efforts to observe what happens in the classroom, and the Merced example didn’t quite fit.
But it was still in the back of his mind as we talked about stories I might pursue, and I went to Merced this fall to shadow the students as they observed a class, interviewed their peers in another, and gave the professors feedback. My story about it is out now, and you can read it here.
Dan and I recently chatted about why we thought this was a story worth telling — and what makes student feedback so fraught for faculty members.
BS: What made the Merced effort so memorable for you?
DB: It was mostly that the folks at Merced seemed to take student input seriously and to solicit it in a thoughtful way. In contrast, when most professors hear the words “student feedback about teaching,” they tend to get really uncomfortable. But that’s because they often think the words refer to student course evaluations.
BS: Why are so many professors wary of course evaluations?
DB: Most people who study evaluations say what they really measure is student satisfaction. They’re also prone to bias. Even supposedly empirical questions that students are asked, like how promptly their professor returns work, can produce skewed results. And then students are often asked about things they don’t really know — like whether the professor knows his or her subject. If you take those issues, and then add the fact that these surveys are often used in tenure and promotion decisions and as proxies for really evaluating teaching and learning, you have a recipe for resistance. At the same time, I also remembered something I once heard from Ken Ryalls, who directs IDEA, a nonprofit that has developed a course-evaluation tool and conducts research on the results. He said that what drove him crazy was “this notion that students don’t know what the hell they’re talking about.” But students spend a lot of time watching faculty members teach — more time than anyone else. I thought: He has a point.
So I have a question for you. What did the faculty members at Merced seem to think of the students who observed their courses?
BS: Working with the program is totally voluntary, so the professors who participate really respect the students’ perspectives and are eager to see the classroom from their vantage point. The students in the program serve an intermediary role. They are trained, and they’re sharing descriptive feedback. But they’re also students, so they can interpret as well as convey what their peers shared.
DB: I think that’s something you capture really well in the story — particularly the point at the end, where the professor is debriefing with the students. He and the students share a concern about freeloading during group work. He thinks he’s engineered a solution by assigning clicker questions. But there’s this nice moment where one of the student observers breaks it to him that the way he sets up his clicker questions doesn’t actually solve the problem. This seemed like such a great example of what you’re talking about: A professor might think he or she is doing something very specific and intentional, but someone who’s watching it unfold and knows how a student thinks can see that it’s not really having that effect. I’m thinking about the fact that participation in this program is voluntary. Was there any common denominator you perceived in the two professors you followed, Marcos Garcia-Ojeda and Noemi Petra?
BS: They shared a desire for excellence. Petra made a comment during her debriefing that she had her dream job. So her motivation to improve — and she was clearly working very hard to do so — came from that. The example you mentioned above, about the specific way students in Garcia-Ojeda’s class answer clicker questions, is such a small thing. But if you want to be great at something, that kind of detail is important to you. I think that can make someone open to hearing feedback.
DB: That’s a great point. And I think that this desire for excellence probably needs to be paired with other things, like a combination of humility, skill, and perhaps above all, a willingness to be vulnerable.
BS: Most professors can probably call to mind an obnoxious comment from a past course evaluation. But have you ever gotten feedback from students that helped you become a better teacher? Tell me about it, at beckie.supiano@chronicle.com, and your story may appear in a future newsletter.