A recent story on the problem of how teaching is evaluated struck a chord with many in higher education. As our reporting showed, student course evaluations — which are known to be flawed measures — still carry a lot of weight on most campuses during annual reviews. Some colleges add in peer observation, especially when a faculty member is up for promotion and tenure, but even that can be problematic.
We asked readers about their own experiences with the teaching-evaluation process. Nearly 300 of you filled out our questionnaire. What follows is a selection of those responses, with some brief analyses. Answers have been edited for length and clarity. In order to allow for candid responses, we have kept all participants anonymous.
Has the evaluation process on your campus affected how you teach or how much time you spend on improving your teaching?
Many of the responses were negative, suggesting that student course evaluations, in particular, led professors to grade more easily, challenge their students less, and pay more attention to being likable.
“Student course evaluations are the only measure by which my year-to-year contract is determined, so I must maintain good evaluations to keep my job. Luckily, I do receive great evaluations, but what I know from experience is that evaluations are driven by how students feel, rather than what they think, so I have to prioritize that in ways that require enormous emotional labor and time. Emails must be replied to at all hours and every day of the week, constant extensions must be granted and extra help provided, even on general things like writing skills, and most importantly, I have to spend more time than ever in my career acting as emotional support for students who come to me to talk for long periods of time about personal issues and mental-health crises.”
“It has generated a culture of fear and of pandering to students’ whims. Also, students seem to write evaluations so that other students can read them and know if a class is easy or hard.”
“Because my university and department consider teaching evaluations to be the best word on the quality of instruction, I’ve changed. I no longer correct students’ writing (they don’t like it). I don’t discuss alternative values and arguments for positions I know my students hold. And I give students high grades for any show of effort. Finally, I take every opportunity to tell my students I care about them. I do care — just not in the way they believe is required. My evaluations couldn’t be higher!”
“Because the ratings ask questions about me and not about what students learn, I find that I have to appease students and be much more ‘gentle’ — when I don’t adhere to normative southern gendered behaviors or even manner of dress I am called ‘harsh’ or ‘mean’ by students. Beyond this I also do ‘police’ myself in my interactions with fully online students — I spend a lot of time adding niceties to my emails so they do not accuse me of being mean or something similar. So this isn’t impacting teaching, but it is governing communication and it feels very yucky.”
Instructors who found student feedback useful said they focused on constructive comments or distributed their own midsemester evaluations designed to elicit helpful feedback.
“I have never gained any valuable feedback from students’ evaluations administered by the college. They either love me or hate me. Instead, the valuable feedback I receive comes when I distribute my own questions: Which assignments were most/least helpful? Did they learn what they needed for their career or area of study? What would they add/subtract if they were teaching the course?”
“I don’t focus so much on the numeric data as the qualitative comments left by students — Do they consistently highlight something about the class that was confusing for them? Something they think would have been helpful? As long as student evaluations are framed not as a way to punish faculty but to give faculty agency and the ability to provide evidence of pedagogical improvement, I think they can be very important.”
Do you feel you are evaluated accurately and fairly?
Where course evaluations were the main metric, the answer to this question was typically “no.” Where a more holistic review was used, answers ranged from “sort of” to “yes.”
“Student evaluations are accurate measures only of student biases and preferences. They are not structured to offer impartial evaluations. Many, many studies have shown this. So no, as a Black woman professor teaching racial and sexual topics in the humanities at a primarily white institution, I am not fairly evaluated.”
“No. My unit’s assessment of my teaching is largely based on popular opinion and the number of student complaints (without regard to whether these complaints have any substance). I am observed, at most, one time each academic year — usually at the last minute before reports are due. These are more about the quality of my presentation than the effectiveness of my teaching.”
“Yes and no. If evaluation were by student input only, then I would not feel fairly evaluated. However, we have a multipronged approach to teaching evaluations, including direct observation by a colleague and the department chair.”
“In evaluating faculty for tenure and promotion, classroom evaluations are simply one element of the evaluative process. Additionally, the evaluations are carefully interpreted and cautiously used. We take a holistic approach to evaluating faculty, which means a set of ‘bad’ evaluations cannot (by itself) cause a faculty member to be denied tenure or promotion. I believe the key to ‘fixing’ the evaluative process is not educating the participants (i.e., the students). The key is to educate those who are interpreting and using the results.”
If you’re an administrator, what challenges do you face in evaluating instructors’ teaching, given the systems your campus asks you to use?
Administrators said student course evaluations could be helpful, depending on how they were used.
“I never want to use them as the only basis for assessing teaching performance. But, when I see trends in student comments, across time and across courses, about an instructor, it is useful information. Student evaluations can also be one of the few mechanisms for instructor accountability if there has been some type of classroom misconduct by an instructor.”
“I think evaluations need to be interpreted broadly. There’s no difference in my mind between a 4.1 and a 4.7. Both are great marks and they are indicators that I don’t need to worry about that faculty member. When I see a 3.0, on the other hand, I think concern is justified.”
“Evaluating instructors’ teaching feels like conducting an orchestra with one hand tied behind my back. Standardized surveys capture some student sentiment, but miss nuances like instructor rapport or ability to adapt to diverse learning styles. Meanwhile, peer review requires significant faculty time and expertise, and its subjective nature can raise concerns about fairness and consistency. Striking the right balance between these imperfect tools, while navigating political sensitivities and limited resources, feels like an ongoing tightrope walk, leaving me constantly questioning if I’m truly supporting effective teaching and fostering student success.”
Has your campus changed any pieces of the process, such as rewriting the student course evaluations, changing how much weight they’re given, moving away from numerical rankings, or improving training for peer evaluations? If so, what has been the effect?
Most respondents said their campus had not made many changes, and where changes had been made, the effects were limited. The most disliked change has been the move to online course evaluations, which has led to a precipitous drop in the share of students who fill them out.
“The big change in recent years is that the schools do online evals now and the response has gone from about 99 percent to 5-15 percent. Instead of getting a well-rounded sample, I just hear from people who hated or loved the course. Or, I hear from almost no one and get nothing useful. It’s scary to know these numbers are such a big part of my reviews, promotion and tenure, etc.”
“We’ve discussed this, and there was an attempt at rewriting the peer-observation form, but they took a simple, one-page form and turned it into a lengthy, multipage essay. No one has the time or inclination to use the redesigned form, and it died a quiet death.”
“My institution changed the name of the evaluation from Student Evaluations to Student Perceptions. Some of the wording of a few questions has been changed, but otherwise the process is the same. These worthless forms (as they are currently employed) are still a major part of a faculty member’s evaluation at our school.”
If nothing has been done, why do you think that is?
“Most senior faculty were hired and achieved tenure in a time when evaluations like these either didn’t exist, or were taken very lightly. Consequently, I don’t think they understand how biased and unfair evaluations can be, particularly to faculty from the margins, or early-career faculty charged with teaching in disciplines that have been targeted and politicized by those outside of academia.”
“Because it takes a lot of work, and that work isn’t something you can fund-raise for, so there’s little desire from administration to work on this type of change. It’s not a flashy new program you can point to when you ask donors for money. Also, there is a lot of this kind of thinking: ‘Well, we know that the student evaluations of teaching are biased, but we take that into account when making decisions.’ So I suspect that a number of individuals believe that we don’t need to fix the underlying problem because they can put a Band-Aid on it.”
“Because (1) teaching really isn’t a high priority, and (2) there’s a real lack of understanding of what effective teaching is and how to evaluate it. A much larger conversation needs to be happening. Right now this is focusing on a symptom as though it’s the actual illness.”