Across academe, we’re worried about AI and hoping our jobs won’t be the ones lost to it. In the same week that a professor friend half-jokingly speculated that ChatGPT would replace all administrators in higher ed, an administrator friend, also tongue-in-cheek, wondered whether we would need any faculty members to teach courses five years from now.
Personally, I hope both predictions prove false. But anyone reading The Chronicle, or teaching college courses, is aware that pretty much everything we do in academe could be affected by the maelstrom of new technologies known as generative AI. And whatever happens, administrators are going to be front and center.
Just last month, a story in The Chronicle reported that an “AI hiring spree” is underway, as colleges “face stiff competition” to build faculty expertise in this arena. Wealthy institutions, the report said, are “snapping up hundreds of new graduates and courting faculty members from peer institutions. Some are erecting new AI-focused institutes and centers. And even at colleges where new hires may not be in the budget, the eagerness to advance teaching in the field has sparked training programs, and incentives for existing faculty members to experiment with AI technologies.”
All of which means that generative AI is just one more crisis that campus leaders have to deal with, but it may well be the one with the most lasting effects on the very nature of institutions and careers — everyone’s — in higher education.
In the Admin 101 column, I’ve been delving (a word that J.R.R. Tolkien, ChatGPT, and I all apparently love) into the increased anxiety and high turnover rate in academic administration in recent years, and the resulting rise in temporary appointments. Decisions about AI are a major new source of stress. With that in mind, I suggest you approach this uncertain terrain with three aims: provide leadership, preserve your mental health, and protect the vocation we all love.
Accept that the AI revolution is here, and it’s rapidly evolving. We are facing a technology that is advancing beyond its original capabilities at remarkable speed. The vast majority of our undergraduates are already using ChatGPT, Claude, DALL-E, Midjourney, Wolfram Alpha, Copilot, and other such tools to solve math problems and write papers. There is no way back to the halcyon days of 2019, when administrators merely struggled with plain old budgetary and political crises.
We don’t yet know how AI will change the way we do things. But it seems very likely to eventually affect every part of the institution — admissions, academics, groundskeeping, and sports, to name just a few. So, first of all, as an academic administrator, you must take the lead in handing out reality checks on your campus: Disabuse AI skeptics of the notion that “business as usual” is an acceptable response, and rein in the AI cheerleaders who would blow the budget on this new tech.
Organize the conversation about AI but don’t impose solutions. One of the ways administrators mislead themselves is by thinking that change can’t occur unless they are the change agents. But how your institution deals with these new tools will require not just tech experts but people who bring curiosity and skills in communication, structural thinking, copy editing and proofreading, logic, and critical thinking. No one department owns all of those, and just about every discipline can contribute to some of them.
In short, there is a lot of AI expertise, and a lot of native tinkering and discovery, already on your campus. This is not something for which you need to bring in a horde of high-priced consultants to tell you what to do.
By all means, convene task forces, hold open forums, ask for input from the departments, and so on. But don’t institute top-down AI policies on your own. The geniuses with the best ideas and solutions are likely skateboarding past you on the quad or teaching first-year Spanish.
There’s a reason we have drug trials: You don’t give an untested treatment to everyone at once. Because generative AI is a technology in serious flux, it would be an even bigger mistake for administrators to rush in and purchase software for the whole campus or devise universitywide policies that could quickly become outdated.
I’ve spent time perusing websites where AI designers and users chat, and I’ve talked with my former students who work in the AI sector. So I’ve seen a lot of complaints about the worsening quality of successive (or, in users’ minds, “regressive”) versions of ChatGPT and other tools. Many of the programs seem to be veering back and forth in restrictions on usage, quality of output, and censorship (for example, not allowing content that might contain negative stereotypes or be politically controversial or sexually explicit). Part of that results from the companies’ differentiating between the expensive “enterprise” versions (licensed en masse to clients) and the free or low-cost versions available to anyone.
To take one unintentionally funny example: In August, I tested a new AI program that advertises itself as “Your Undetectable AI Writer: Generate high-quality, engaging content in seconds. … Get started now and transform your essay and research writing process!” The company claimed that its tool was being used by Harvard and Cornell (I took that to mean it had been used by undergraduates at those universities). Its website featured a sample paragraph of “enhanced” writing that contained the following creative new vocabulary: contious, scemtofic, expension, tecnology and Twcnhology, naviagtion, and cruical.
Adding to the complexity: An AI tool that adds value when applied to teaching and research in, say, chemistry courses might be different from what works in history or music classes. Countless startups are trying to sell AI programs to specialized customers such as law firms, restaurants, and, yes, colleges. Each sales pitch is some version of “this will solve all your problems.”
Maybe in a year or three, that will prove true of some new or updated AI tool. But not yet.
The point is: Don’t get stampeded into campuswide solutions or mindless AI boosterism. You can’t build a highway with a concrete mix that you designed last Thursday (well, you can, but it might fall apart in a year). Instead, find the money for experimentation, and start warning your students that newer isn’t always better when it comes to AI “twcnhology.”
Study the potential indirect costs and risks of AI. When I did a quick review of news stories on AI adoption in the workplace, I found reports of remarkably spotty and even alarming outcomes. The enthusiasm of top managers for all the money they think these tools can save has led to a lot of rushed, bad decisions.
A good friend of mine who is an IT manager for a campus said he now spends half of his day trying to convince administrators that AI is not going to magically solve the institution’s problems without a huge investment of money, time, training, and other resources. You don’t want to be an evangelist for bad AI that leads to workplace fiascoes and cost overruns.
Administrators also have to think through the many potential indirect costs before advocating a particular AI approach or product. For example: What are the proposal’s implications for an institution’s energy capacity and costs? One dirty secret of AI is its high energy consumption. As a group of researchers put it, the adoption of AI enterprise models will come at a “steep cost to the environment, given the amount of energy these systems require and the amount of carbon that they emit.”
The campus power grid is about to be severely taxed even if institutions do nothing. Most colleges and universities cannot afford for everyone on the campus to use AI continuously — not just for software-licensing reasons but in terms of the power bill.
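To see what’s at stake, it helps to sketch the arithmetic, however crudely. Here is a minimal back-of-the-envelope estimate in Python; every figure in it (campus population, queries per day, energy per query, electricity price) is an assumption I’ve invented purely for illustration, and the true energy cost of a single AI query is itself a matter of active debate:

    # Back-of-the-envelope campus AI power-cost estimate.
    # Every value is an illustrative assumption, not a measured figure;
    # substitute your own institution's numbers.
    users = 20_000          # assumed number of people on campus using AI tools
    queries_per_day = 15    # assumed average queries per user per day
    wh_per_query = 3.0      # assumed watt-hours per query (estimates vary widely)
    price_per_kwh = 0.15    # assumed dollars per kilowatt-hour

    daily_kwh = users * queries_per_day * wh_per_query / 1_000
    annual_cost = daily_kwh * 365 * price_per_kwh

    print(f"Estimated daily load: {daily_kwh:,.0f} kWh")
    print(f"Estimated annual cost: ${annual_cost:,.0f}")

The output matters less than the structure: The bill scales multiplicatively with every assumption, so modest growth in usage or in per-query energy compounds quickly. And if the models run in a vendor’s data center rather than on campus, the cost simply resurfaces in licensing fees instead of the power bill.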
Be a spokesperson for fundamental academic values. In most businesses, when the introduction of an AI tool into internal work systems is botched, the “only” outcomes are lost revenue and unhappy workers and clients. Those concerns can be fixed or assuaged. But higher education has far more at stake. You can’t redo a flopped course. It is prohibitively expensive and time-consuming to rerun a big, complicated lab experiment, and to repair the damage to a team of faculty members, postdocs, and grad students. We have one chance to get it right for a lot of people who trust us with their futures or those of their children.
It’s delusional to believe that AI will do wonderful things at no cost to jobs and salaries. In my own field of communications, generative AI promises to eliminate a lot of jobs and create others. The same might be true in disciplines as varied as accounting, contract law, mechanical engineering, and French.
This issue is complicated, almost mind-numbingly so. My conversations with former students who are on the front lines of AI development tend to last two to three hours, and, afterward, I feel the need to soak my head in ice water. So I feel your pain as administrators worried about making the wrong choices on this divisive and confusing front.
Your job is neither to “blow up the university and build a new one from its ashes” nor to stick your head in the sand. Rather, it’s to start the ball rolling: Get people tackling these complicated questions on your campus, engage the standout thought leaders on AI, and organize your institution’s search for solutions that balance costs, benefits, and values.