For the last couple of weeks, we’ve asked you to share any changes your college has made in its course evaluations and any patterns you’ve noticed in your students’ comments. The responses we’ve gotten so far — thanks to everyone who has written in — haven’t suggested any trends.
Some colleges asked different questions; others kept theirs the same. Some colleges didn’t use evaluations in instructors’ reviews, or made using the surveys optional; others made no change. Some professors reported that students had been frustrated with remote instruction; others said students had expressed gratitude for their efforts. It’s not too late to share your experience, by the way: the form is here.
While the responses we’ve gathered haven’t revealed clear patterns, they do offer food for thought. I was curious to hear more about the changes made at the University of Arizona, which Lisa Elfring, its associate vice provost for instruction and assessment, shared.
Arizona didn’t collect course evaluations at all in the spring of 2020. Their use was optional for the fall, and the university added a new question: “Given the uncertainties of the pandemic situation, what factors supported or hindered your learning in this course?” (The university is still analyzing students’ responses.) University guidelines indicated that if evaluations were available, they were to be used only for formative feedback.
In an interview, Elfring explained that those adjustments built on other changes Arizona had made in its course evaluations. In an effort to limit the intrusion of student bias, the university had stopped asking students about their instructors and started asking them about their experiences. To forestall statistically problematic comparisons, it had stopped summarizing evaluations with numerical scores. Arizona encourages professors to collect formative, midsemester feedback from students. The university is also moving to incorporate other kinds of teaching evaluation, though that has been complicated by the pandemic.
Those changes, which took effect in the fall of 2019, are in line with steps other colleges have taken to mitigate well-established problems with traditional course evaluations. Those problems are clear and serious. Course evaluations can capture and codify students’ biases against people of color, women, non-native English speakers, and other marginalized groups. And traditional evaluations don’t even measure particularly well what they claim to measure. The changes Arizona had already made can be difficult to pull off, but given the evidence, they seem like obvious steps to take.
So, in my conversation with Elfring, I was struck by how nuanced the decisions about using course evaluations during the pandemic can be.
Arizona’s decision not to collect course evaluations last spring, Elfring told me, had been meant to help professors, preventing the limitations of pandemic teaching from being held against them. But it didn’t always play out as planned.
“What we had heard,” Elfring said, “was that people who maybe were the people we were trying to protect actually had been hurt by this.” Without evaluations, they didn’t have evidence of the hard work they had done to support students.
Perhaps that helps explain why colleges have gone in such different directions with course evaluations. This might be a case where even if the goal is clear, figuring out how to get there is not.
As I mentioned, we’re still interested in hearing about your experience with course evaluations in the pandemic. And we’ll be curious to see whether the pandemic accelerates changes in evaluations like the ones Arizona had already made, too. If you have thoughts on that, drop me a line, at beckie.supiano@chronicle.com.
Further Thoughts on Student Critiques
Recently, I shared the story of a professor who had hired a student to critique his teaching. Several readers wrote in to share similar experiences, and a couple pointed to the work of Alison Cook-Sather, who directs the Teaching and Learning Institute at Bryn Mawr and Haverford Colleges.
The institute runs a formal, semester-long partnership program in which instructors team up with an undergraduate not currently taking their course. You can read more about the SaLT program (for Students as Learners and Teachers), which others have used as a model, here. You may also be interested in this collection of reflections on how partnerships at a handful of colleges worked during the transition to remote teaching, published in Teaching and Learning Together in Higher Education.
ICYMI
- First-generation students participate in high-impact practices (like service learning or a culminating senior experience) at lower rates than do their classmates. Students’ experience of high-impact practices also varies by major, according to a new report from the National Survey of Student Engagement.
- The pandemic is awful. But will working within the limits of Covid-19 precautions make professors better teachers? Rob Jenkins, for one, has changed for the better, he writes in a recent Chronicle essay.
- For American colleges, the one-year anniversary of the pandemic is just around the corner. Josh Eyler shares some observations on what that marker means in a recent Twitter thread.
Thanks for reading Teaching. If you have suggestions or ideas, please feel free to email us, at beckie.supiano@chronicle.com or beth.mcmurtrie@chronicle.com.
—Beckie
Learn more about our Teaching newsletter, including how to contact us, at the Teaching newsletter archive page.