As a newly recruited department chair, you have barely moved into your office when several insiders arrive to inform you of urgent personnel problems:
- A certain staff member has been failing at his job for years. He is habitually late, and his work — when he manages to complete it — is full of errors.
- An assistant professor is not going to get tenure. Her teaching is consistently rated as poor, her research record is lackluster, and her service is mediocre.
- The director of the department’s graduate program “retired on the job” years ago. All the metrics of the grad program are getting worse, and morale among the students is low.
Your informants look at you expectantly. What are you going to do about this “problem” person?
First, of course, you must substantiate the allegations. So you turn to personnel files containing past performance evaluations and discover … nothing. Or at least nothing that is “actionable.” Negative comments about the person’s work were few or muted. Basically, people wrung their hands and complained to each other, but no one in authority (like the previous chair or senior professors on the promotion-and-tenure committee) wrote up or documented anything of consequence.
In short, they neglected to evaluate this person’s job performance in any meaningful sense of the word. This month, the Admin 101 series — on how to become an academic administrator and succeed on the job — tackles one of the most challenging, miserable, and politically dangerous aspects of campus leadership: evaluating people.
No task for the chair, dean, provost, or beyond is as fraught with reluctance, angst, blowback, incompetence, and yes, cowardice, as is writing performance evaluations. Nevertheless, thorough evaluations are vital both to the success of your unit and to the faculty or staff member in question.
Overcoming evaluation avoidance. No decent person enjoys delivering bad news. But it’s equally dysfunctional to avoid telling people the truth about their job performance out of fear of angering them or hurting their feelings. It’s magical thinking to believe badly behaving or poorly performing people will spontaneously become skilled professionals and great colleagues if you tiptoe around their misdeeds or failings.
Many discussions with managers in the private sector have led me to believe that the reluctance to conduct a true evaluation of someone’s performance is universal. Academe, however, has certain characteristics that tend to increase the fear, cursoriness, and avoidance factors:
- A tradition of collegiality. Academics generally want to be collegial within “our departmental family.” We have a difficult time expressing negative opinions, especially in print, about a person who sits a door or two down the hallway. To paraphrase a line on parenting from the Roseanne television series, many administrators would rather be a popular leader than a good one.
- A tradition of shifting leadership. Power roles in academe are variable and capricious. Many department chairs, for example, serve on a rotating or elected basis, so the colleague for whom you write a critical evaluation may very well be your chair the next year, writing an evaluation of you. Many administrators may rightly wonder, “What’s the point of giving real feedback when it may come back to haunt you?” So they say nothing and do nothing.
Other hindrances, real and perceived, abound. Nevertheless, you cannot insist that students, parents, alumni, presidents, and trustees — let alone the general public, the media, and the legislature — “respect” and “value” higher education if you do not respect it enough to evaluate people’s work performance credibly.
So what does a good evaluation look like? I think it has four connected pillars:
Good evaluations are fair. Are you being even-handed in the way you evaluate people on a given dimension of their work? Let’s take teaching. Over the course of my career, I have seen dozens of different formats for assessing teaching on annual faculty evaluations. Each format had its benefits and disadvantages. The two key mistakes that many administrators make, whichever system they choose, are relying on it exclusively and applying it unevenly.
In a previous essay I outlined other ways to assess teaching besides the usual end-of-semester student evaluations. Student ratings of teaching do have some utility, especially when judged year after year, class after class, and compared with the scores of other faculty members in similar courses. But those ratings should not be your sole metric in assessing a faculty member’s performance in the classroom.
Likewise, student comments on course evaluations can run from the offensive to the revealing, but they, too, must be read impartially and consistently. If, for instance, students comment that a certain professor is “always late for class,” you must weigh that concern in your evaluation as much as you would if the same comments arose for another professor.
The point is: Put in a good-faith effort to ensure that the same expectations are applied to everyone. You may make mistakes, but part of your responsibility is to try to be roughly uniform.
Good evaluations are accurate. How do your department, institution, and field measure “research productivity”? Good evaluations result when you and everyone else in the unit have an up-to-date, shared, and accurate understanding of those norms.
When it comes to counting research publications, for example, I have seen all sorts of formulae, varying widely by field and institution. Whichever method you use to “count pubs,” just make sure it means what you think it does.
That wasn’t the case for a new chair in a social-sciences field, who found his department’s promotion-and-tenure system to be confusing — for the candidates as well as the voting tenured professors. Tenure files listed multi-authored publications, for instance, but never noted how much each author had actually contributed to the work. The chair worked with department members to revise the notation so that it included the percentage contribution of each of a publication’s authors. Everyone felt the new system was fair because it more accurately reflected what people had really accomplished.
Good evaluations have context. You are inviting subjective treatment of an evaluation when you inadequately explain a faculty member’s record — especially to people outside the field, such as administrators up the approval chain or the members of a campuswide tenure committee.
I know an associate dean at an arts college who carefully annotates the CVs of tenure candidates, describing in detail the level of achievement of each accomplishment. Why? Because most academics outside the candidate’s field will be unfamiliar with how creative work — like, say, a sculpture — is valued. Rather than simply listing a particular prize won by an assistant professor, the associate dean includes a description of how important and rare that prize is.
Context also matters even when everyone already knows what you are talking about. Take student evaluations of teaching and the resulting scores. Objectively, if an instructor keeps racking up ones on a five-point scale where five means “excellent,” there are problems. Even so, many “low” scores must be evaluated in comparison with other sets of scores laterally, such as:
- The median scores for teaching across the department.
- The median scores in similar classes.
- The median scores in the same class, taught by other instructors.
Longitudinally, you should also consider the instructor’s prior performance over time.
Further, there are qualitative aspects of context. When you noted the faculty member’s shortcomings in the classroom, did the instructor seem to take the concerns seriously or reject them? Did she or he seek assistance from respected pedagogues on the faculty? Or volunteer to attend training at the campus teaching center? If two assistant professors were up for tenure, and one struggled in the classroom at first but sought out help and improved over time, while the other floundered yet radiated indifference to improvement, who wouldn’t argue that the former candidate for tenure should be viewed more favorably than the latter? The wider picture matters.
Good evaluations are specific. Another serious flaw in most evaluation systems is the lack of specific instructions for improvement. Informing a young scholar who is behind the curve on research publication that “you should publish more” is not helpful. How much more? Where? And by when? People need reasonable answers to those questions. Better yet, identify mentors, workshops, anything that can aid improvement.
Likewise, on a staff member’s evaluation, it does no good to declare “you need to work harder.” If the staff person is expected to process certain forms, then what constitutes “good productivity”? You need not be overly exact (“your job is to process 10 of these forms a day”), but a range would help. You cannot expect people to “get better” at some aspect of their job if you haven’t defined for them what “better” is.
At the same time, try to avoid false promises in your prescriptive expectations — like telling an assistant professor who has been under-publishing for six years that a sudden burst of activity will be enough to save the young scholar’s tenure bid. Sounder advice to give that tenure tracker: Find an institution and a position that is a better fit. Such counsel may not be received with gratitude — it certainly never was when I offered it — but it may be what is truly best for the person and his or her career. Bestowing false hope is not a kindness.
Your advice can also be practical and helpful. If, for instance, a staff member needs to learn new software to be more efficient, identify the right training workshops and provide the time to learn it. Or if, post-tenure, a professor is not producing much research anymore, maybe it’s time to have a conversation about increasing that faculty member’s teaching load. Render aid and find options; don’t just say “do it.”
Finally, as in every other aspect of leadership, tone, sincerity, and attitude matter. If people think you are out to get them, they will shut down and become defensive. But if they believe your goal is to help, they are far more likely to pay attention and take action. And there is genuine mutual pleasure in seeing someone you’ve guided thrive after struggling on the job.
I think my own research, teaching, service, and, yes, administrative work have progressed over the decades because well-meaning, experienced, and astute people pointed out where, how, and why I could do better. I am grateful not only for what they told me, but also for the manner in which they broke the news. As an academic leader, helping other people advance is a key part of your job — but you can’t assist anyone if you don’t evaluate them fairly, accurately, contextually, and clearly.
David D. Perlmutter is a professor in and dean of the College of Media & Communication at Texas Tech University. He writes the “Career Confidential” advice column for The Chronicle. His book on promotion and tenure was published by Harvard University Press in 2010.