Not long ago, a tech startup created an avatar of Anne Frank urging students not to blame anyone for the Holocaust. If that sentence doesn’t make you pause and consider what we’re dealing with in education, nothing will.
We’ve entered completely new territory with generative AI and should stop using analogies to try to explain its impact. AI is not like the introduction of calculators. As Alison Gopnik so thoughtfully opined, generative AI is a cultural technology that is reshaping how we interact with information and one another. We haven’t had to deal with anything like this before in education, and AI’s impact won’t be confined to coursework.
In January, I hosted the University of Mississippi’s third AI Institute for Teachers. Faculty members arrived armed with questions about how to detect students’ use of these tools, redesign assignments, and draft syllabus policies on the topic. But I greeted them with far thornier questions: What does it mean to teach in a world where machines simulate human thought? How do we prepare students for a future in which “authenticity” is mediated by algorithms?
When confronted with tools like ChatGPT, faculty members tend to cluster around one of two extremes — uncritical acceptance of AI as inevitable or outright rejection of it as an ethical threat. But clinging to either view obscures the real challenge: how to develop thoughtful, practical approaches to this shifting landscape. Generative AI is unavoidable, but its potential impact in higher ed is far from inevitable. Unavoidability speaks to the reality of our technological moment; the claim of inevitability speaks to the hype, much of it a sales pitch and little else. The recent news from China about the DeepSeek reasoning model shook tech and energy markets, in part, because it challenges the narrative that U.S. companies like OpenAI would always dominate this market.
Clearly the technology is not static. Professors and students must contend with generative technology as it is now, not as it is promised to be. We have no idea how teaching and learning will be affected by the new wave of AI features that mimic human reasoning (such as DeepSeek’s R1 or OpenAI’s o1 model). We’re discovering its evolving capabilities in real time.
That is why faculty members — from every department — must find a middle ground on AI between unthinking acceptance and outraged denial. My advice: Teach your students to think about what the technology does and what it might mean for their world.
What makes tools like ChatGPT unavoidable in education? AI companies have committed to releasing versions of their tools free of charge and with few safeguards, in a massive public experiment that defies belief. There is no touchstone moment in educational history that compares to our current AI moment.
If you think generative AI is like MOOCs, then I invite you to have a three-minute discussion about that topic with a multimodal AI tool called Hume’s Empathetic Voice Interface. You don’t even need an account. Simply click the link and pick the synthetic persona of your choice, turn on your microphone, and start a conversation. Get emotional with it and see how quickly it responds to match your mood. Do you still believe this technology won’t profoundly change education, labor, or even society itself?
Many of us have wanted to actively resist generative AI’s influence on our teaching and our students. The reasons given vary: environmental impact, energy use, economic fallout, privacy concerns, loss of vital skills. But the reason that most commonly pops up? We don’t want to participate in something many of us find fundamentally unethical and repulsive. Such anti-AI arguments are valid and make us feel like we have agency — that we can resist, protest what we believe to be unjust, and take an active stance. But can you resist something you don’t fully understand? And to really understand this technology, you have to use it, and not just a few times.
Resistance is impractical. Refusing to use ChatGPT in your own work, or banning your students from using it, is a largely symbolic act, given that the AI technology you despise is already intertwined with all the other technology you use every day in this highly digital world. It reminds me of the recent obsession in K-12 schools with banning cell phones while ignoring all the other types of screens in the classroom.
The laptop screen or smartphone that you are reading this on wasn’t ethically sourced or sustainably made. The labor used to mine the necessary resources and assemble the final product was invisible and exploitative — like so many of the economic forces that fuel our reality. That companies like OpenAI relied on similar cheap labor to train generative tools like ChatGPT isn’t a surprise.
But taking an ethical stance against AI creates a fantastical version of good versus bad technologies that borders on the absurd. Each of us already supports dozens of mega-corporations that offer products we use daily and that we would be hard pressed to function without. To critique AI while ignoring the unethical foundations of the other tech we use is like boycotting Starbucks while sipping a Nespresso.
Our inability to step back from the narratives surrounding technology and make critical, informed decisions is a problem worth exploring, and generative AI is certainly part of it — but not all of it. So the question isn’t whether to participate in these systems — you already do — but rather how to engage with them critically and intentionally. Students deserve spaces where such inquiry is welcome. They deserve more than boilerplate policies, whether you’re an advocate of AI or an opponent.
Let’s reframe the debate. The resistance-versus-adoption binary is largely performative rather than practical. You can certainly choose to avoid tools like ChatGPT, but truly escaping AI’s influence would require constant vigilance, opting out of numerous features embedded in apps people use day in and day out. And when it comes to your students using AI, you have no real control.
Yet eagerly jumping on the AI bandwagon without any guardrails isn’t really practical either. The technology evolves so rapidly that most of us can’t keep pace. True adoption requires not just finding and paying for premium AI tools but also understanding how to meaningfully use them — all while struggling to stay current with constant updates.
Right now, generative features appear like countless potholes in the digital road. We can only swerve so often before hitting one. The road isn’t going to change; how we talk about the challenges we encounter on it should. We should all, whatever our disciplines, be advising students to be cautious and skeptical about generative technology.
Jack Stilgoe, a professor of science and technology studies at Britain’s University College London, suggests a framework for talking about AI called “a Weizenbaum test” — not to gauge how intelligent AI is, but to assess “the public value” and “real-world implications” of this technology. Imagine if we started such conversations in our classrooms. Stilgoe took questions first posed decades ago by Joseph Weizenbaum, the MIT professor who created the first chatbot (“Who is the beneficiary of our much-advertised technological progress and who are its victims?”), and adapted them to today’s discussions about AI:
- Who will benefit?
- Who will bear the costs?
- What will the technology mean for future generations?
- What will the implications be — not just for economies and international security, but also for our sense of what it means to be human?
- Is the technology reversible?
- What limits should be imposed on its application?
We should all think deeply about how we frame these conversations with our students and colleagues. Doing so in these early days of generative AI has the potential to meaningfully influence campus purchasing decisions and AI policies.
Build sustainable AI literacy. Engaging with AI in higher education requires far more resources and time than anyone wants to admit. Changing how we talk about ChatGPT and other tools calls for a level of nuance that we’re not going to find on social-media feeds. None of us knows enough about generative tools to make the important decisions necessary to chart the best path forward. Among the things we need:
- Grants and other financial support from local and federal agencies.
- Policy initiatives that promote careful innovation with AI, not reckless deployments.
- Commitments from campus leaders to treat AI literacy across the curriculum as a continuum.
- Time to explore the tools and AI use cases in the classroom.
- Discussions to reach consensus over siloed positions of “for” or “against.”
The work ahead will be impossible to sustain without such support and resources. We’re not going to catch a break from AI developers — the new features and updates are going to keep coming.
Most important, we all deserve some grace here. Dealing with generative AI in education isn’t something any of us asked for. It isn’t normal. It isn’t fixable by purchasing a tool or telling faculty members they can opt out if they choose. AI is and will remain unavoidable for virtually every discipline taught at our institutions.
If one good thing comes of generative AI, let it be that it helps us see clearly just how complicated our relationships with machines have become. As difficult as this moment is, it might be what we need to prepare for a future in which machines that mimic reasoning and human emotion refuse to be ignored.