Academe needs all the innovation it can get these days. Yet many trial balloons and pilot projects are doomed to quick failure, often because of mutual distrust. Some administrators, wary of faculty resistance to change, are overly secretive about their plans. And many professors are suspicious of any top-down idea they had little hand in shaping.
The result: Ambitious proposals languish or disappear — not necessarily because they were lousy ideas but often because of how awkwardly they were rolled out and tested. In my previous Admin 101 essay, I focused on how to design trial balloons in ways that minimize controversy and suspicion. This month’s column offers advice on how to effectively test and evaluate your prototypes.
Good planning by good people is not always enough in the test-pilot stage. A case in point: A dean of arts and humanities at a regional state university told me about a trial balloon she’d floated to turn around some worrying enrollment declines. The plan: Increase outreach from academic departments to the region’s high schools, both by hiring additional recruiting staff and by getting faculty members to attend high-school meetups with potential students and their families.
The recommendations came from a campus task force, and all the dean needed was a few faculty members for the pilot program.
But no one volunteered. Rather than drop the idea, the dean persuaded the chair of a language department with a severe enrollment drop to take on the test-pilot duties personally. After a year, his program saw a marked increase in applications for the fall, and so, one by one, the other departments came aboard.
The dean’s experience reflects some of the best practices for (a) testing out a new idea on a small scale, (b) making sure both the positive and negative takeaways are measured and understood, and (c) “scaling up” the trial balloon to wider application after the testing stage. Here are some tips to guide you in the all-important prototype phase.
Continuity is important as you shift from planning to testing. It’s best to have a representative group of people involved in designing the structure, parameters, and goals of your test project. And at least some of those same folks should be involved when the prototype gets underway and it’s time to oversee and assess the outcomes.
It’s a big ask to expect faculty and staff members to serve on consecutive committees, but some degree of continuity is crucial. People who stay on from the planning to the implementation phase can maintain institutional memory and provide accountability. It especially helps if the chair of the oversight committee was a major player in the early stage and, thus, has a personal stake in the pilot project’s success and is motivated to regularly review its progress (or lack thereof).
Conversely, a major risk is that you will fall in love with your brainchild and be unable to admit its flaws. An oversight committee needs at least a few skeptics — people who are willing to be objective and acknowledge when an initiative falls short. A bad outcome — whether in scientific research or leadership experimentation — should be treated not as a failure but as evidence of how to refine the next round of solutions.
Don’t scale up too fast. Often in higher ed, prototypes are rapidly expanded before they are fully tested. That tendency may be driven by panic over pressing crises such as enrollment declines or fund-raising shortfalls. But the evaluation should not be rushed, even if an institution is under financial strain. As we know from engineering airplanes and grad programs, something that is designed too quickly, implemented haphazardly, and immediately thrown into full production is often so flawed that it no longer serves its original purpose. Thus, the first rule of a prototype: Keep it limited in scale to a few select departments before expanding.
I saw that approach work when I was head of a journalism school and our college’s new dean aimed to involve all of its roughly three dozen department chairs and directors in fund raising. I had some experience in that area, but most of the chairs and directors had little to none. Rather than abruptly announce, “Everyone must now fund raise,” the dean wisely encouraged select units to lead the effort. Those of us who had done some fund raising served as advisers, and the university’s development office provided additional coaching and support. Because we did not scale up too quickly, the shift of some fund-raising responsibility to the unit level took hold gradually, which supported its long-term viability.
Define the metrics of success. A trial balloon, by its very nature, is a work in progress. But in order to tell whether it’s actually effective, you will need some tangible benchmarks. What outcomes will make your prototype program a “success”? What kinds of data can you collect to analyze and derive lessons from?
The humanities dean I mentioned earlier, who was looking to reverse enrollment declines, set a modest goal: Demonstrate that increased outreach to high schools would lead to a measurable rise in applications for targeted majors. A 2-percent increase in applications for a particular major would show “moderate” improvement, 5 percent would be viewed as “strong,” and 10 percent would be “exceptional.”
Don’t assume a successful test has universal relevance. A fundamental principle in management science and industrial sociology is that success can be just as dangerous as failure. Just because a test project succeeds in one context doesn’t mean it will work everywhere all at once.
That was my experience as dean of a communications college when we began moving toward offering fully online undergraduate-degree programs. We started the trial in two departments, and both worked out well. But we soon realized that we couldn’t apply the same template in every department, because each of them faced unique challenges in areas such as tech support. Writing-intensive programs had lower tech demands, while video-intensive ones required high-end instructional tools. So while the pilot project offered universal lessons for all departments, it also had to be tailored to suit varied needs.
Treat problems and failures as steps forward. In my long academic career, I’ve gotten to know many impressive scholars and artists across the disciplines. What they have in common, to put it in scientific terms, is a willingness to accept whatever the evidence shows, whether it supports the null hypothesis or the alternative. Discovering that a theory is wrong is just as important to advancing science as proving it correct.
But for administrators, I fear, it’s getting harder and harder to admit, “Well, that didn’t work.” There’s so much pressure to pretend that trial balloons always fly beautifully and straight and are obviously the product of genius innovators. Certainly the financial and political stakes are high in academe nowadays, but prototypes exist to test what works and what doesn’t. As an administrator, you have to follow the lead of the best researchers and own up to mistakes.
Get over your fear of publicly acknowledging failure. In numerous cases, I have seen administrators admit, “OK, we’ve learned that this probably doesn’t work as a whole, but one part of it does. Let’s go back to the drawing board.” And instead of the leader “looking bad,” faculty members were impressed by the level of transparency. Trial balloons are an essential tool for academic innovation. They must be designed carefully, tested rigorously, and evaluated free of any predisposition for a desired outcome. Admitting that not everything flew right on a prototype is an opportunity to build trust, a quality that is sorely lacking on a lot of campuses.