Illustration by The Chronicle; Getty

What I Learned Serving on My University’s AI Committee

We need to embrace a more radical response.
The Review | Essay
By Megan Fritts May 23, 2025


Over the course of the past two years, university committees focused on the impact of artificial intelligence have assembled across the country. By now, nearly every research university has compiled some form of group-based academic “response” to the virtual time bomb posing an existential threat to higher education. The purposes of these committees are many, varied, and often unclear. The University of California system’s massive and multifurcated AI initiative declares its goal to be “reimagining and improving higher education for the 21st century through the deployment of responsible AI.” The AI Task Force at Yale explains that “rather than waiting to see how AI will develop, we encourage our colleagues … to proactively lead the development of AI by utilizing, critiquing, and examining the technology.” The University of Iowa is home to five distinct AI committees promising to “embrace the advancements proactively, manage risks and challenges, and prepare for impacts of this transformative technology.” The academic-AI committee in particular will “work to identify ways to incorporate AI into the curriculum and enhance the learning experience for students.” These are three examples of the likely hundreds of university AI committees that exist, with thousands of university faculty and administrators meeting week after week, month after month, to do … something.

At the University of Arkansas at Little Rock, where I am a philosophy professor, there are several AI-response committees. I am a member of two of them. One of these committees is composed of faculty in the College of Humanities, Arts, Social Sciences, and Education — the programs most affected by the sophisticated large language models (LLMs) that are writing our students’ papers. Our committee’s original goal was to work toward aligning our individual and college programs with the AI policy that would be handed down to us from the top levels of the university. But that university policy never came, so our goals shifted. Now part of our focus is on creating individualized AI policies for each department. We discuss the accuracy of various AI detectors: Is it ethical to use these tools, given that they are not 100 percent reliable? What can we do if a student appeals the charge of academic dishonesty? A fair bit of time is spent tweaking the wording of some proposed syllabus policy — should Grammarly count as prohibited AI? And sometimes we have more abstract debates about whether LLMs should be considered an important tool for equity, or a monstrous environmental disaster, or — a popular position — both.

But what we spend the most time doing is simply talking to one another about our experiences, as educators, with students using AI for their work. While many on the committee bemoan the increased stress of grading student work, others see opportunities for creative assignments involving human-AI collaboration and benefits for students who speak English as a second language. Many also sense a sort of pedagogical necessity in allowing — encouraging, even — students to use AI “responsibly,” believing that mastering high-efficiency LLMs is something students need to learn in college to be ready for the work force. Various ideas for “rethinking assessment” are bandied about, eliciting suggestions of video essays, grading based on classroom discussion, assignments that include a list of where and how AI was used, and so on. During one of these discussions, I remember offhandedly remarking, “Sure, but I mean, they still need to learn how to write a paper.” It was not until after the resulting awkward pause that I began to see that all of these committee meetings were dancing around the unspoken follow-up question that dangled from the end of my statement: “Do they?”

There are many in the humanities, and even more outside the humanities, who would argue that what is important to assess are thoughts, ideas, creative capacity itself, and that being nitpicky about ChatGPT wrongly shifts the focus to what is essentially just a matter of words. There are many others, like me, who disagree. Despite the intelligence and integrity of each member of my college’s AI committee, our progress (of which there has been little, if any) could never outpace the AI advancements with which we have been working to coexist. But keeping up with advances in AI technology is not the biggest challenge we face. To come up with a good AI policy for a university, a department, or even a household, one first has to have an idea of which skills and formative experiences one is prepared to lose for the sake of AI use, and which to fight to retain. And it is here that we have discovered consensus is most sorely lacking.

In a recent essay published in The New Yorker called “Will the Humanities Survive Artificial Intelligence?” D. Graham Burnett, who teaches the history of science at Princeton University, reflected on his experience using ChatGPT to understand some dense academic material:

Increasingly, the machines best us in this way across nearly every subject. … I’m a book-reading, book-writing human — trained in a near-monastic devotion to canonical scholarship across the disciplines of history, philosophy, art, and literature. I’ve done this work for more than thirty years. And already the thousands of academic books lining my offices are beginning to feel like archeological artifacts. Why turn to them to answer a question? They are so oddly inefficient, so quirky in the paths they take through their material.

Burnett poses explicitly the question I can feel hanging in the air of AI-committee meetings, like a thick blanket of humidity that nobody wants to admit is making them sweat: If this robot can write these books better than we can, then what are we doing here?

Of course, many still question the possibility of creative AI capable of producing truly inventive art, philosophy, and writing. Yet it is undeniable that the results of these LLMs are becoming more impressive, more difficult to distinguish from human work. And once this possibility has been raised, it is impossible to put it back in the box. If we can generate valid arguments, beautiful art, and compelling novels at the level of human experts — possibly better! — by entering a simple prompt into a text box, is there any point in continuing to do these things ourselves? Even if there is, does the value of studying the humanities justify the time spent on them and the dollars used to fund them? We in these disciplines are now being forced to articulate to politicians, administrative boards, donors, and prospective students the aims and value of our study. Perhaps some of us, like Burnett, cannot. Perhaps we’re not sure. What has become clear to me in the AI committees is that, even among those in the same discipline, any attempt at such an articulation is far from univocal.

There are some things we can agree on. A viral post on X written by Matt Dinan, an associate professor in the great-books program at St. Thomas University in New Brunswick, Canada, reads: “The honest B or C student, submitting essays filled with awkward constructions, malapropisms, earnest, if failed or surface-y arguments — a surprise hero of the present age.” The enthusiastic reaction to Dinan’s post among exasperated and defeated faculty around the world revealed a certain longing. For a lot of us, our motivation to enter academe was primarily about helping to form students as people. We’re not simply frustrated by trying to police AI use, the labor of having to write up students for academic dishonesty, or the way that reading student work has become a rather nihilistic task (as an article published in New York magazine this week on the ubiquity of AI cheating so vividly demonstrates). Our frustration is not merely that we don’t care about what AI has to say and therefore get bored grading; it is that we actively miss reading the thoughts of our human students.


But in these committee rooms — unless you are unlucky enough to be on a committee with me — discussions about the aim of humanities education as a personally transformative experience are probably not happening. Of course, our longing for earnest-if-middling student papers does not fit neatly into assessment procedures, lists of learning objectives, or student-evaluation questions. Writing well is difficult — difficult for the student to learn, difficult for professors to facilitate, and difficult for programs to market. If a student wants to become acquainted with the art of self-expression in their free time, we might think, Well, that’s nice — a bit luxurious, perhaps — but LLMs will do for the everyman.


But to be an everyman is still to be a (hu)man, and some would argue that the very essence of humanity entails something that, in fact, LLMs will not do at all. The philosopher Ludwig Wittgenstein maintained that “the speaking of language is part of an activity, or of a form of life,” pointing throughout his Philosophical Investigations at a two-way formative relationship between our life of language and our life of action. Human life forms human language because human activity gives rise to the need for communication tools — that part is intuitive. But language also helps to mold human activity — our nominal language, intimately connected to and arising from our community’s “form of life,” works to shape this life by providing the categories of thought that determine how we see the world. Taking up this line of thought, the philosopher and political theorist Alasdair MacIntyre argues in his thesis, “The Significance of Moral Judgments,” that we learn what it means to act morally by existing within a linguistic community that imbues its members with a shared set of value categories. Human linguistic expression arises from the human form of life and also shapes the human form of life. There is arguably nothing more distinctly human about a life than that it is filled to the brim with language.


Most of you who are reading this piece likely had moments as young children, or during adolescence, of being suddenly struck by the newfound ability to communicate something to someone else. The burgeoning capacity for speech often goes hand in hand with new observations and clarity about the world. The baby realizes that his mother is separate from him, and soon “Mama!” comes tumbling fervently from his lips. As a child, no sooner had books captured my rapt attention than I began trying to write them — as a teenager, the same phenomenon, with the result being notebooks full of witheringly bad poetry. As a philosopher, I work to elucidate concepts that may lie on the edge of language; once they are communicated, I can see more clearly what I had set out to dig up.

Comparisons are sometimes made between LLMs and calculators, to make the point that AI bans are as futile and philistine as calls to return to the abacus. But the work that we bypass when using a calculator is less important than what we bypass when using an AI language generator for writing. To be a human self, a human agent, is to be a linguistic animal. Popular theories of mind would have us think that we learn words and attach them to ideas that we already have, but the opposite is closer to the truth: To learn to use language just is to learn how to think and move about in the world. When we stop doing this — when our needs for communication are met by something outside of us, a detached mouthpiece to summon, describe, and regale — the intimate connection between thought and language disappears. It is not only the capacity for speech that we will slowly begin to lose under such conditions, but our entire inward lives.


In my more fatalistic moments, I am inclined to think that the existential crisis for education posed by emerging technology was bound to arise from the very conception of the modern university. A picture of liberal education that focused on, as W.E.B. Du Bois endorsed, the pursuit of “transcendentals” like “Truth, Beauty, and Goodness” has been all but obliterated by a much more utilitarian understanding of the purpose of education. Humanities faculty now earn our positions by producing research, attain tenure primarily through research, and assess student learning through students’ research “artifacts.” Like good laborers, we could explain the value of our classes to skeptical administrators, deans, or STEM faculty: We’re not so different from you, we say, but instead of lab reports or lines of code, our students’ product is essays, poems, arguments. But as the new generations of AI — particularly OpenAI o3 and Google Gemini 2.5 Pro — have begun producing solutions to tough logic puzzles, striking art, and seemingly insightful expositions on the thought of figures like Immanuel Kant, this product-based picture of the value of the humanities gives rise to a pretty obvious question: What if you could have the product without first forming human producers?

If the core aim of our disciplines really is the artifact — essays, books, arguments, talks, poems — as many hold, then we have every reason to expect our imminent replacement by AI. Even if this technology never produces anything at the level of a Hegel or an Octavia Butler, it will be — perhaps already is — good enough. And “good enough + efficient/cheap” always beats “best” when business is on the line.

If, on the other hand, the aim of our disciplines is the formation of human persons, then the real threat AI poses is not one of job replacement or grading frustration or having to reimagine assignments but something entirely different. From this perspective, language-generating AI, whether it is utilized to write emails or dissertations, stands as an enemy to the human form of life by coming between the individual and her words.

Burnett, in his New Yorker article, described AI bans in classrooms as “simply, madness,” arguing that the significance and ubiquity of the technology made its use in the classroom a kind of pedagogical obligation. There is a sense in which I agree with Burnett — it may well be that there is no place left for what I do. The humanities are the study of humane pursuits, of goods unique, and uniquely essential, to human life. And human life is not winning any popularity contests these days. Elon Musk recently posted that “it increasingly appears that humanity is a biological bootloader for digital superintelligence.” The fervor to escape the human experience shows up in the rising popularity of biohacking, the increasing terror of aging, and society-wide derision of the dependent elderly. Perhaps part of the reason for the speed and recklessness with which AI has been taken up is that many of us are now ready to opt out of the human experience altogether, as if human existence were an evolutionary stage that could soon be left behind.


But if we are not yet ready to make such a concession, then universities will need to embrace a much more radical response to AI than has so far been contemplated. Preserving art, literature, and philosophy will require no less than the creation of an environment totally and uncompromisingly committed to abolishing the linguistic alienation created by AI and reintroducing students to the indispensability of their own voice. Because of the ubiquity of AI technology, students will likely be using it persistently outside the classroom in their personal lives. The humanities classroom must be a place where these tools for offloading the task of genuine expression are forbidden — stronger, where their use is shunned, seen as a faux pas against the deeply different norms of a deeply different space.


This is precisely where humanities faculty on AI committees can make a difference: These radical policies will never be given the time of day by university administrators unless we in these disciplines can present a united front regarding the true aim and importance of a humanities education. It is risky, I acknowledge, to admit that our departments are not in the business of producing new products or supplying students with expertise that will increase their earning potential after college. But at a time when higher-education funding is on the chopping block, the prospect of AI will make the alternatives even riskier. If our deans and boards of directors think that our primary goal is to produce — arguments, manuscripts, essays — then the ways in which we are deskilled and depersoned by AI will have no obvious negative impact on meeting these objectives. Why not, in fact, promote AI use to help make talk titles more attention-grabbing, help clean up paragraphs of “oddly inefficient” monographs, or suggest ways to fix the meter of some verses of poetry?

Humanities faculty on AI committees must resolve to be honest about what is at stake in these policies. We must not shirk our own duties of authentic self-expression, settling for a watered-down compromise that seems humbler, safer, or more “serious.” If AI is allowed to expand its presence in the humanities classroom, I will put my money on the bleakest predictions of education prognosticators. Given what is on the near horizon, it seems to me that, even in the riskiest setting, we stand to lose relatively little by being bold and honest about the true nature of our work. While universities still exist, and the humanities are still taught, we have everything to gain.

This essay was first published in The Point.

About the Author
Megan Fritts
Megan Fritts is an assistant professor of philosophy at the University of Arkansas at Little Rock.