But they also knew that student course evaluations have significant flaws. Among them: Gender and racial biases affect the way students evaluate their instructors.
So as part of the effort, some of the Hamilton professors studied strategies for getting rid of those biases. They focused on gender bias in particular because, with fairly even numbers of men and women in their sample of professors, they could get clear evidence. The sample did not have enough faculty of color to evaluate racial bias.
Their findings were recently published in a paper, “Can you mitigate gender bias in student evaluations of teaching? Evaluating alternative methods of soliciting feedback,” in Assessment & Evaluation in Higher Education.
Prior research on whether giving students information about implicit bias can reduce it has produced mixed results. That work has also focused on bias in quantitative survey questions, so the Hamilton group wondered whether they could create a completely qualitative form that would mitigate bias. They were also curious whether other changes to the course-evaluation process, like delaying when students completed the surveys, might help. Their goal: collect evidence on whether these changes to student evaluations could, as they hoped, put a dent in bias.
The Hamilton professors designed a randomized, controlled trial to test whether these strategies would reduce bias against professors who are women. Forty professors teaching 210 courses volunteered for the study. The students evaluating them were randomly assigned to three groups. Students in the control group completed a standard evaluation, with both quantitative and qualitative questions, near the end of the semester. One treatment group completed an alternative assessment at the semester’s end with open-ended, reflective questions that included an explanation of bias and how to avoid it. The second treatment group completed the same alternative assessment, but not until the start of the following semester.
Students’ responses were then reviewed by external readers with expertise in the evaluation of teaching, who rated how positive, constructive, and specific students’ feedback was, and noted any mention of the professors’ identities.
There were differences among students’ evaluations based on which group they were in. The control group, which was asked more directed questions, gave the most specific feedback. The second treatment group, which evaluated their courses later, provided more positive feedback. But neither intervention significantly reduced bias against professors who are women. In each of the three scenarios, women were consistently scored lower on average than men.
“Even though it’s in a sense disappointing we didn’t find a way to reduce the bias, I think it’s a really good reminder to people that it’s really hard to get rid of,” said Ann Owen, the lead author of the paper and a professor of economics at the college, who also chaired the faculty committee.
Here’s another problem: The external readers, who were asked to look for markers of instructors’ identities, found few. So while the evaluations were biased, those biases didn’t present themselves in a way that’s obvious to a reader — even an expert one.
So where does that leave things? Student evaluations are biased, and those biases appear to be stubborn. Just knowing this, though, isn’t especially helpful when evaluations are used for their typical purpose: deciding which professors to promote, offer tenure to, or, in the case of adjuncts, keep on for the next term. Did a particular professor get negative feedback because of bias, or because of ineffective teaching? Reading the evaluations can’t answer that.
All evidence is flawed, Owen points out. It helps not to rely completely on one kind. There’s a place for student perspectives in the evaluation of teaching, but their comments should be one part of the story. Other evidence, like professors’ self-reflections and peer review, should also be used. Drawing on multiple sources can illuminate whether negative student evaluations point to a problem with a professor’s pedagogy or treatment of students, or whether students are perhaps penalizing a woman who teaches a big course and makes it hard to earn an A.
And that, indeed, is the direction Hamilton has decided on. A few years ago, it passed new tenure and promotion guidelines, and departments are working to revise their own accordingly. (Owen’s department, economics, has completed its revision.) The guidelines ask departments to describe in specific terms what good teaching means to them, and what evidence they will use to identify it. They also push departments to use multiple forms of evidence when possible.
Pass it on
Ian Petrie likes to ask his students about their other courses, including which is their favorite. For one thing, it’s a way to signal to students that he knows that his is not the only course they’re taking. It’s also a way for him to learn about good teaching happening at the University of Pennsylvania that could inform the work of its Center for Excellence in Teaching, Learning and Innovation, where he is the director of graduate student programming and pedagogy. And Petrie shares the positive things students say about their favorite courses with the professors teaching them (without identifying the students).
I noticed Petrie’s post about this practice on Bluesky and emailed him to learn more. I was especially drawn to the closing Petrie used in an example of the emails he writes to colleagues, which he also posted on the site. It struck me as a particularly warm — and complimentary — invitation to check out the campus teaching center: “I don’t know if you have had much or any contact with our center, CETLI, but I hope that we might see you at some of our faculty programming at some point,” he wrote. “It sounds like folks could learn a thing or two from you!”
Petrie also passes along compliments to scholars whose work his students enjoy, he said in an email. “I always tell people: send thank-you notes!” he told me. “They make people happy!”
I wonder if any of our readers have a similar practice of sharing compliments about colleagues’ teaching. If so, how do you collect and convey this feedback, and what happens when you do? Tell me about it at beckie.supiano@chronicle.com, and your example may appear in a future issue of the newsletter.
What’s good about Gen Z?
Beth has reported several stories on the challenges of teaching this generation of traditional-age students. But now, she wants to hear the flip side. Many professors have told her that today’s students are empathetic, creative, insightful, and passionate — and that when you can tap into those traits in class, great things happen. She wants to know: What about your students impresses, inspires, or challenges you in your teaching? Have you found that particular topics or assignments resonate with students in ways that shape how you approach this generation in the classroom?
To share your experience, write to Beth at beth.mcmurtrie@chronicle.com or fill out this Google form.
Thanks for reading Teaching. If you have suggestions or ideas, please feel free to email us at beth.mcmurtrie@chronicle.com or beckie.supiano@chronicle.com.
-Beckie
As always, nonsubscribers who register for a free Chronicle account can read two articles a month. Your readership supports our journalism.
Learn more at the Teaching newsletter archive page.