Why I’m Not Scared of ChatGPT

The limits of the technology are where real writing begins.

The Review | Essay
By Christopher Grobe January 18, 2023

Each time I embark on a new writing project, I find that I’ve forgotten how to write. I type and delete sentence fragments. I list claims in a random order — then decide that most of them are indefensible. It feels awful. I feel stupid. But from long experience, I know these feelings will eventually subside. Soon, I’ll see the outline of an argument; I’ll trace it badly, then better, then well. At some point, I’ll start imagining an audience whose phantom quibbles and confusions can be addressed by writing better.

This is what I value most in writing: the way it carries me from confusion to understanding, enforcing standards of clarity and persuasion along the way. I learned this by writing essays for my own humanities professors — and it’s what I now try to teach my students.

The recent release of ChatGPT, a language-generating tool from OpenAI, has inspired dark fantasies in the minds of some humanities teachers. “The College Essay Is Dead,” they declare; we are facing “The End of High-School English” — the titles of two essays from The Atlantic. But these concerns are not so much about writing, understood as a process and an adjunct to thought, as they are about writing assessment, understood as a tool for sorting students and awarding distinctions. How will we “judge” our students accurately, asks Stephen Marche, when the writing process “can be significantly automated”? What will replace writing assignments “as a gatekeeper [and] a metric for intelligence?” asks Daniel Herman. This focus on assessment then calls into existence the kind of student most easily assessed: one entirely unentangled with technology.

But if we treat learning (not distinction) as the goal of education, then generative AI looks more like an opportunity than a threat. As software that can simulate human thinking, it may indeed create some thoughtless students who rely on it too heavily. But it might also create students who are ready to think twice, to push beyond statistically likely ways of thinking. This sort of student, ready to demand more than AI can provide, will be precisely what an age of generative AI requires: people who understand the difference between human and machine intelligence, and who therefore won’t mistake its glibbest outputs for the horizon of all human thought.

In early December, I decided to prove this point by staging exactly the scenario that is giving some of my peers in the profession indigestion: I asked students to spend an hour trying to get ChatGPT to write a draft of their final projects for them. Before I set them loose, however, I wanted to model how to engage critically with ChatGPT. So, I briefly shared and analyzed my own attempts to get ChatGPT to write a final lecture for the course, a gen-ed English course called “Listening to Podcasts,” which introduces students to the history of podcasts and teaches them how to analyze different podcast genres across time.

After spending much of the previous evening with ChatGPT, I had landed on the following prompt for it: “Write a lecture about how podcasts are developing toward greater complexity and aesthetic ambition.” I had tried broader prompts in hopes of getting more complicated responses, but they produced only boring boilerplate. I had also tried giving it a sequence of arguments to make, but this only made each argument shallower — while also highlighting ChatGPT’s failure to sustain the logical connections I had provided between one argument and the next. Instead, I had found the most success by giving it a single, simple argument to make. That’s what I shared with my students: six paragraphs made of 430 words.
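
For readers who want to rerun this kind of comparison themselves, here is a minimal sketch using OpenAI’s Python client. It is an illustration, not a record of my setup: ChatGPT had no public API when I did this work, so the model name and the shortened “broad” prompt below are assumptions for demonstration.

```python
# A minimal sketch of comparing prompt framings, assuming the `openai`
# Python package (v1+) and an OPENAI_API_KEY set in the environment.
# The model name and the "broad" prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    # A broad prompt: in my experiments, this produced boring boilerplate.
    "Write a lecture about podcasts.",
    # A single, simple argument: the framing that worked best for me.
    "Write a lecture about how podcasts are developing toward greater "
    "complexity and aesthetic ambition.",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; substitute whatever is current
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}")
    print(response.choices[0].message.content[:400], "\n")
```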

A cross-section of the Turk from Racknitz, showing how he thought the operator sat inside as he played his opponent. Racknitz was wrong both about the position of the operator and the dimensions of the automaton. (Wikimedia Commons)

Let’s pause here to reflect, for a moment, on the amount of human intelligence that went into producing this “AI-generated” lecture. I had definite ideas and arguments I wanted to make. I fed those ideas and arguments into ChatGPT, assessed the output, and judged its initial responses too predictable and superficial. And so I landed on a certain way of posing a certain kind of prompt that would make ChatGPT sound maximally competent. (And I’m not even talking about the human intelligence coded into ChatGPT by the writers whose words have been fed into its language model or by the “ghost workers” who helped tag and train its data.)

Even with all that human intelligence at its back, ChatGPT still struggled to make an argument compelling to folks who had actually studied the subject for a semester. For instance, it made interpretive claims about podcast history that sounded right but, on reflection, didn’t square with the facts we knew. Such errors were usually caused by its helpless repetition of thought patterns acquired from other arenas, which simply didn’t make sense in this context. For instance, it posited that the sonic complexity of podcasts must have resulted from increasing competition in recent years, a capitalistic theory contradicted by two facts every student had learned: that sonically ambitious podcasts had existed almost from the start of that medium, and that the most financially successful podcasts today are rambling chatcasts with little in the way of editing or sound design. As a pattern replicator, ChatGPT will reproduce ideology over fact. (And, while it went for a capitalistic ideology at first, it’s an equal-opportunity pattern lover: It can, when prompted, make an equally compelling and equally incorrect Marxist critique of its own argument.)

In hiding the seams of its own relentless pattern replication, ChatGPT uses many of the same tricks I teach my students to avoid as warning signs of insufficient argument. For instance, it exclusively uses transition words like “Another,” “Additionally,” and “Over all” [sic] to start its paragraphs, which may lend an air of structure to the essay but in fact provide no logical connection between adjacent ideas. (Why “another” example — and “another” example of what, exactly?) Then, it couches every claim in ambiguous hedge words like “most,” “often,” “many,” and “some,” which ask the reader to do the writer’s work by deciding for themselves how limited or broad each claim was actually meant to be.

These problems, and more like them, are caused by what is currently the most obvious shortcoming of ChatGPT: its inability to cite and use evidence in anything resembling the way we require in the interpretive humanities. When prodded to cite specific evidence, it supplies a slightly narrower generalization. Even when supplied with specific evidence relevant to its arguments, it cannot do the work of connecting the one to the other. That’s because it is not actually dealing with facts about the world, but with the proximity of various clusters of words in a hugely multidimensional language model. It can endlessly move through the layers of that model and around each layer’s clusters of keywords, but it cannot get below these words to the facts they represent. When it appears to do so, it is usually just narrating part of its own inner architecture, as when it paraphrases a quotation you’ve given it or maps out the structure of a metaphor contained within that quotation. Because we can recognize these limitations, we can make use of them and also think beyond them. We of course have the option to thoughtlessly assent, but we can also think twice — and isn’t that a good description of what we teach in the liberal arts?
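
To picture what “proximity of clusters of words” means, consider a toy sketch: in a language model, words live as vectors in a high-dimensional space, and the model measures how close they sit to one another. The three-dimensional vectors below are invented for illustration; real embeddings have hundreds or thousands of learned dimensions.

```python
# Toy illustration of word proximity in a vector space. The model can
# measure that "podcast" sits near "radio" and far from "evidence," but
# nothing in this geometry touches the facts those words refer to.
# These 3-D vectors are invented; real embeddings are learned from text.
import numpy as np

embeddings = {
    "podcast":  np.array([0.9, 0.1, 0.3]),
    "radio":    np.array([0.8, 0.2, 0.4]),
    "evidence": np.array([0.1, 0.9, 0.2]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Closeness of direction between two vectors, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["podcast"], embeddings["radio"]))     # high
print(cosine_similarity(embeddings["podcast"], embeddings["evidence"]))  # low
```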

Once we bring the hype around ChatGPT into line with reality, we can grapple with the true effects of generative AI: Far from replacing human intelligence, it will provide new starting points for some of the processes we routinely use to think. Any writers who find unbearable the blank page and blinking cursor may choose instead to put their first, dogged efforts into playing with ChatGPT. If they are patient and canny, they may succeed in getting a creditable first draft out of the machine. By that point, the work they’ve done will already be significant. But then, like any student with a first, creditable draft, they will begin the real work: revision, turning what ChatGPT wrote into what the writer actually means.

As an instructor, I have no interest in a vision of student achievement that would deem this hybrid process illegitimate. The things ChatGPT cannot do (cite and analyze evidence, limit claims, create logical links between claims, arrange those claims into a hierarchy of significance) are the basic stuff of college-level writing.

Me: ChatGPT, please finish that paragraph.

ChatGPT: However, that does not mean that ChatGPT has no value in the classroom. In fact, I believe that it can be a useful tool for helping students to develop their own writing skills and to think more critically about the ideas and arguments that they are presenting. By working with [recognizing the limits of] ChatGPT [and by pushing themselves beyond those limits] and engaging in dialogue with it, students can learn to identify and address the weaknesses in their own writing, and they can learn to structure their arguments in a more logical and coherent way. Additionally, ChatGPT can be a useful resource for generating ideas and prompts for writing assignments, as well as for providing feedback and support during the writing process. Overall, I believe that ChatGPT has the potential to be a valuable tool for student learning and development, as long as it is used in a way that complements and enhances, rather than replacing, traditional methods of instruction. [what is unique to their thinking, and to the way that human beings think. An age of generative AI, helplessly replicating conventional wisdom and ideology, is an age that will need the uniquely, resistantly human more than ever.]

A version of this article appeared in the February 3, 2023, issue.
About the Author
Christopher Grobe is an associate professor of English at Amherst College.