One of academe’s most glaring blind spots lies in how, and when, we evaluate our leaders. A recurrent problem, it’s even more acute now that higher education is constantly living under a microscope.
We tend to leave gaping windows of time between evaluations, under the mistaken notion that campus leaders need breathing room to adjust to the job. But we have seen this dance before: Leaders are left in their roles too long, without recurring oversight, and the institution discovers the gaps in ability or integrity only after it is too late.
Of course plenty of campus administrators are effective leaders, fully capable of performing well with autonomy. Yet even for them, the absence of regular feedback means we aren’t reinforcing their strengths or remedying their weaknesses. And leaving the poor performers unattended means we wind up with what is technically known as “a hot mess.”
So how should we conduct executive evaluations, and deploy them with better results? Before tackling that, let’s explore the pros and cons of academe’s traditional methods of evaluating senior leaders.
The committee. This is the in-house approach. A group of internal players — typically a mix of administrators and faculty members — is charged with gathering data and providing a report on a president or a provost. The data-gathering process may include a homemade written evaluation tool (sometimes online, sometimes using good old-fashioned paper and pencil), as well as comments gathered via interviews and focus groups. The committee is charged with creating a final report that outlines the executive’s strengths and weaknesses, and sometimes includes a recommendation on contract renewal.
Pros: It’s cheap. Internal committee members know the context and culture of the institution far better than an external evaluator.
Cons: You risk getting what you pay for. Confirmation bias can influence what data the committee chooses to magnify versus what it neglects or minimizes. Likewise, internally designed evaluation surveys can be too insular and narrow in scope.
The external review. Because the external-review process works effectively in appraising departments, programs, and curricula, some institutions use the same method to evaluate an executive. They hire an external consulting firm to conduct the review and deliver a report of its findings.
Pros: It ensures a cleaner, less biased outsider perspective. It’s usually more efficient than an internal process.
Cons: It can be very costly. Evaluators also may overlook the significance of cryptic comments or reluctantly shared bits of information. Outside firms often miss the subtleties of internal culture.
The 360-degree evaluation. A staple in other professional sectors, this tool has been steadily making its way into higher education. A 360-degree instrument gathers validated data anonymously on an executive’s performance and behaviors. Evaluators typically fill out a questionnaire that may include scaled questions for quantitative data and open-ended ones for qualitative written comments. The executive under review fills out the questionnaire as well. Scores are compiled to protect anonymity, and comments are unlabeled, thus freeing raters to be completely honest. The result is a robust report comparing the executive’s own perceptions against the outer world’s assessment.
Pros: It’s more affordable than an external review. 360-degree instruments protect anonymity and allow for more honest feedback. A validated tool ensures the questions and methodology are sound.
Cons: No oral interviews are involved. These instruments can be heavy on business-sector jargon. Since the respondents are anonymous, their views all receive equal weight — whether their knowledge of the executive’s work is intimate or distant.
The hybrid. By combining elements of various evaluation models, institutions are better able to maximize both the process and the quality of the results. A common hybrid supplements a 360-degree instrument with targeted oral interviews conducted by either an internal committee or an outside firm. Hybrids allow for variety and customization based on the unique campus, culture, or situation.
Pros: A mix of methods brings in a synthesis of information. It can provide robust yet organized and focused data.
Cons: A hybrid approach that is poorly planned or carried out can produce convoluted results. The effort requires very clear expectations of ownership, process, and deadlines.
How to make evaluations work. The absence of regular evaluation isn’t just a problem at the top. A great failure across the academic system is how inconsistently managers provide people with regular, specific, high-quality feedback.
Department heads, for example, desperately need regular feedback, but seldom get it. Usually they are left to their own devices, struggling to figure out how to run their department, lead peers, supervise staff, raise money, recruit faculty members (if they are so lucky), and navigate the dean’s office. How do you do that without some kind of feedback to know if you are on the right track? Instead, academic culture seems to presume that because chairs are certified smart, thanks to their Ph.D.s, they can figure it all out.
Likewise, at the executive level, senior leaders tend to get feedback only sparingly between formal reviews, or if things suddenly start going spectacularly wrong. Those voids are not necessarily anyone’s “fault.” Presidents and provosts are exceedingly busy (imagine having more requests for standing monthly meetings than there are hours in a month), and governing boards are not around to observe day-to-day performance.
The system is simply not set up to provide leaders with the regular, routine feedback they need. Here are some concrete ways to fix the executive-evaluation process:
Adopt this philosophy: “Nothing in an evaluation should be a surprise.” If feedback is given at regular intervals — at least quarterly — a formal evaluation becomes merely a summation of what the leader has been hearing all along.
Make it a responsibility of the governing board to provide presidents and provosts with official feedback twice annually. (And wouldn’t it be helpful if the institution could provide feedback to its governing board, as well?)
Include a second, less-robust interim evaluation in the formal process, giving executives the opportunity to know in greater detail how they are performing.
For both the interim and formal evaluations, consider using a 360-degree instrument. Rather than just adapting a corporate tool, do some research and try to find a valid one that has been designed specifically for higher education’s distinctive context and needs.
Regularly ask critical stakeholders for their views on the leader’s performance, and share their remarks with the executive right away. Don’t wait to do that until the formal-review process. These people are experiencing the executive’s behaviors on a routine basis, and have firsthand knowledge to share. When it’s captured and shared in real time, it is much more effective than if the executive hears about situations months or even years after they’ve occurred.
Create a feedback culture across the campus. A commitment to more frequent performance evaluation can start at the top and filter downward. Make it a job requirement for presidents and provosts (i.e., spelled out in the job description) to provide regular feedback, formally or informally, to their leadership teams throughout the year. Hold deans and other such leaders equally accountable for providing their teams, department heads, and directors with regular feedback.
To avoid surprises, performance evaluation needs to occur frequently at all levels. That’s the only way to reinforce what is working well (and ensure it continues) and identify anything that’s not (and create accountability measures and expectations for fixing it).
At the same time, teach executives and managers the art and science of providing high-quality, effective feedback. For example, leaders must understand how each person they supervise best responds to praise and/or criticism. Some people respond better when praise is delivered privately, others publicly. Some need a gentle landing and others a knock to the proverbial head. Delivering feedback in a way that is actually helpful is a skill that doesn’t often get discussed, and academic leaders are rarely trained in how to do it. Yet it is an essential component of a healthy, transparent, and high-integrity culture.
A note about courage. Routine feedback, supported with more robust review cycles, can both head off disasters and reinforce effective leadership. That all sounds good, but it only works when an institution chooses to hold itself accountable — to listen and act when challenges and negative feedback come along, and to make trust and integrity nonnegotiable operating principles.
I read a lot of books about leadership and management. Most are either impractical or sales grabs for the masses. Once in a while, however, one will change the way I understand leadership and organizational effectiveness. A recent read that made me sit up straight is Awakening Compassion at Work: The Quiet Power That Elevates People and Organizations, published in 2017 by Monica C. Worline, a research scientist at Stanford University’s Center for Compassion and Altruism Research and Education, and Jane E. Dutton, a professor of business administration and psychology at the University of Michigan.
Their book describes how an organization needs to have the courage to face its problems with “fierce compassion” — holding itself accountable for mistakes and performance problems and handling them directly with genuine concern and an understanding of humanity and humility.
To do that takes great will, desire, and courage on the part of the many people who make up a college or university. And how much better would we all be served if we could only stop and listen fully to one another? If we applied this principle in our evaluation processes? And if we cared enough to make it an indispensable value?