Advice

Adopt or Resist? Beyond the AI Culture Wars

How to find a middle ground about a technology that is, and will remain, unavoidable for virtually every discipline.

By Marc Watkins | February 20, 2025
[Illustration by Randy Lyhus for The Chronicle: a classical column against sky and clouds, becoming pixelated from left to right]

Not long ago, a tech startup created an avatar of Anne Frank urging students not to blame anyone for the Holocaust. If that sentence doesn’t make you pause and consider what we’re dealing with in education, nothing will.

We’ve entered completely new territory with generative AI and should stop using analogies to try to explain its impact. AI is not like the introduction of calculators. As Alison Gopnik so thoughtfully opined, generative AI is a cultural technology that is reshaping how we interact with information and one another. We haven’t had to deal with anything like this before in education, and AI’s impact won’t be confined to coursework.

In January, I hosted the University of Mississippi’s third AI Institute for Teachers. Faculty members arrived armed with questions about how to detect students’ use of these tools, redesign assignments, and draft syllabus policies on the topic. But I greeted them with far thornier questions: What does it mean to teach in a world where machines simulate human thought? How do we prepare students for a future in which “authenticity” is mediated by algorithms?

When confronted with tools like ChatGPT, faculty members tend to cluster around one of two extremes — uncritical acceptance of AI as inevitable or outright rejection of it as an ethical threat. But clinging to either view obscures the real challenge: how to develop thoughtful, practical approaches to deal with this shifting landscape. Generative AI is unavoidable, but its potential impact in higher ed is far from inevitable. The former speaks to the reality of our technological moment; the latter, to the hype, much of it a sales pitch and little else. The recent news from China about the DeepSeek reasoning model shook tech and energy markets, in part, because it challenged the narrative that U.S. companies like OpenAI would always dominate this market.

Clearly the technology is not static. Professors and students must contend with generative technology as it is now, not as it is promised to be. We have no idea how teaching and learning will be affected by the new wave of AI features that mimic human reasoning (such as DeepSeek’s R1 or OpenAI’s o1 model). We’re discovering its evolving capabilities in real time.

That is why faculty members — from every department — must find a middle ground on AI between unthinking acceptance and outraged denial. My advice: Teach your students to think about what the technology does and what it might mean for their world.

What makes tools like ChatGPT unavoidable in education? AI companies have committed to releasing versions of their tools for free, with few safeguards, in a massive public experiment that defies belief. There is no touchstone moment in educational history that compares to our current AI moment.

If you think generative AI is like MOOCs, then I invite you to have a three-minute discussion about that topic with a multimodal AI tool called Hume’s Empathetic Voice Interface. You don’t even need an account. Simply click the link, pick the synthetic persona of your choice, turn on your microphone, and start a conversation. Get emotional with it and see how quickly it responds to match your mood. Do you still believe this technology won’t profoundly change education, labor, or even society itself?

Many of us have wanted to actively resist generative AI’s influence on our teaching and our students. The reasons given vary: environmental impact, energy use, economic fallout, privacy concerns, loss of vital skills. But the reason that most commonly pops up? We don’t want to participate in something many of us find fundamentally unethical and repulsive. Such anti-AI arguments are valid and make us feel like we have agency — that we can resist, protest what we believe to be unjust, and take an active stance. But can you resist something that you don’t fully understand? And to really understand this technology, you have to use it, and not just a few times.

Resistance is impractical. Refusing to use ChatGPT in your own work, or banning your students from using it, is a radical action, given that the AI technology you despise is already intertwined with all the other technology that you use every day in this highly digital world. It reminds me of the recent obsession in K-12 schools with banning cell phones while ignoring all the other types of screens in the classroom.

The laptop screen or smartphone that you are reading this on wasn’t ethically sourced or sustainably made. The labor used to mine the necessary resources and assemble the final product was invisible and exploitative — like so many of the economic forces that fuel our reality. That companies like OpenAI relied on similar cheap labor to train generative tools like ChatGPT isn’t a surprise.

But taking an ethical stance against AI creates a fantastical version of good versus bad technologies that borders on the absurd. Each of us already supports dozens of mega-corporations that offer products we use daily and that we would be hard pressed to function without. To critique AI while ignoring the unethical foundations of the other tech we use is like boycotting Starbucks while sipping a Nespresso.

Our inability to disconnect from narratives about technology and make critical, informed decisions is a problem that we should explore, and generative AI is certainly part of that — but not all of it. So the question isn’t whether to participate in these systems — you already do — but rather, how to engage with them critically and intentionally. Students deserve spaces where such inquiry is welcome. They deserve more than boilerplate policies, whether you’re an advocate of AI or an opponent.

Let’s reframe the debate. The resistance-versus-adoption binary is largely performative rather than practical. You can certainly choose to avoid tools like ChatGPT, but truly escaping AI’s influence would require constant vigilance, opting out of numerous features embedded in apps people use day in and day out. And when it comes to your students using AI, you have no real control.

Yet eagerly jumping on the AI bandwagon without any guardrails isn’t really practical either. The technology evolves so rapidly that most of us can’t keep pace. True adoption requires not just finding and paying for premium AI tools but also understanding how to meaningfully use them — all while struggling to stay current with constant updates.

Right now, generative features appear like countless potholes in the digital road. We can only swerve so often before hitting one. The road isn’t going to change; how we talk about the challenges we encounter on it should. We should all, whatever our disciplines, be advising students to be cautious and skeptical about generative technology.

Jack Stilgoe, a professor of science and technology studies at Britain’s University College London, suggests a framework for talking about AI called “a Weizenbaum test” — not to gauge how intelligent AI is, but rather to assess “the public value” and “real-world implications” of this technology. Imagine if we started such conversations in our classrooms. Stilgoe took questions that Joseph Weizenbaum, the MIT professor who created the first chatbot, posed decades ago (“Who is the beneficiary of our much-advertised technological progress and who are its victims?”) and adapted them to today’s discussions about AI:

  • Who will benefit?
  • Who will bear the costs?
  • What will the technology mean for future generations?
  • What will the implications be — not just for economies and international security, but also for our sense of what it means to be human?
  • Is the technology reversible?
  • What limits should be imposed on its application?

We should all think deeply about how we frame these conversations with our students and colleagues. Doing so in these early days of generative AI has the potential to meaningfully influence campus purchasing decisions and AI policies.

Build sustainable AI literacy. Engaging with AI in higher education requires far more resources and time than anyone wants to admit. Changing how we talk about ChatGPT and other tools calls for a level of nuance that we’re not going to find on social-media feeds. None of us knows enough about generative tools to make the important decisions necessary to chart the best path forward. Among the things we need:

  • Grants and other financial support from local and federal agencies.
  • Policy initiatives that promote careful innovation with AI, not reckless deployments.
  • Commitments from campus leaders to treat AI literacy across the curriculum as a continuum.
  • Time to explore the tools and AI use cases in the classroom.
  • Discussions aimed at reaching consensus rather than remaining in siloed positions of “for” or “against.”

The work ahead will be impossible to sustain without such support and resources. We’re not going to catch a break from AI developers — the new features and updates are going to keep coming.

Most important, we all deserve some grace here. Dealing with generative AI in education isn’t something any of us asked for. It isn’t normal. It isn’t fixable by purchasing a tool or telling faculty members they can opt out if they choose. AI is and will remain unavoidable for virtually every discipline taught at our institutions.

If one good thing happens because of generative AI, let it be that it helps us see clearly how complicated our relationships with machines have become. As difficult as this moment is, it might be what we need to prepare for a future in which machines that mimic reasoning and human emotion refuse to be ignored.

We welcome your thoughts and questions about this article. Please email the editors or submit a letter for publication.
About the Author
Marc Watkins
Marc Watkins is assistant director of academic innovation at the University of Mississippi, where he directs the AI institute for teachers. He writes regularly about generative AI in education on his newsletter Rhetorica.