Each time I embark on a new writing project, I find that I’ve forgotten how to write. I type and delete sentence fragments. I list claims in a random order — then decide that most of them are indefensible. It feels awful. I feel stupid. But from long experience, I know these feelings will eventually subside. Soon, I’ll see the outline of an argument; I’ll trace it badly, then better, then well. At some point, I’ll start imagining an audience whose phantom quibbles and confusions can be addressed by writing better.
This is what I value most in writing: the way it carries me from confusion to understanding, enforcing standards of clarity and persuasion along the way. I learned this by writing essays for my own humanities professors — and it’s what I now try to teach my students.
The recent release of ChatGPT, a language-generating tool from OpenAI, has inspired dark fantasies in the minds of some humanities teachers. “The College Essay Is Dead,” they declare; we are facing “The End of High-School English” — the titles of two essays from The Atlantic. But these concerns are not so much about writing, understood as a process and an adjunct to thought, as they are about writing assessment, understood as a tool for sorting students and awarding distinctions. How will we “judge” our students accurately, asks Stephen Marche, when the writing process “can be significantly automated”? What will replace writing assignments “as a gatekeeper [and] a metric for intelligence?” asks Daniel Herman. This focus on assessment then calls into existence the kind of student most easily assessed: one entirely unentangled with technology.
But if we treat learning (not distinction) as the goal of education, then generative AI looks more like an opportunity than a threat. As software that can simulate human thinking, it may indeed create some thoughtless students who rely on it too heavily. But it might also create students who are ready to think twice, to push beyond statistically likely ways of thinking. This sort of student, ready to demand more than AI can provide, will be precisely what an age of generative AI requires: people who understand the difference between human and machine intelligence, and who therefore won’t mistake its glibbest outputs for the horizon of all human thought.
In early December, I decided to prove this point by staging exactly the scenario that is giving some of my peers in the profession indigestion: I asked students to spend an hour trying to get ChatGPT to write a draft of their final projects for them. Before I set them loose, however, I wanted to model how to engage critically with ChatGPT. So, I briefly shared and analyzed my own attempts to get ChatGPT to write a final lecture for the course, a gen-ed English course called “Listening to Podcasts,” which introduces students to the history of podcasts and teaches them how to analyze different podcast genres across time.
After spending much of the previous evening with ChatGPT, I had landed on the following prompt: “Write a lecture about how podcasts are developing toward greater complexity and aesthetic ambition.” I had tried broader prompts in hopes of getting more complicated responses, but they produced only boring boilerplate. I had also tried giving it a sequence of arguments to make, but this only made each argument shallower — while also highlighting ChatGPT’s failure to sustain the logical connections I had provided between one argument and the next. Instead, I had found the most success by giving it a single, simple argument to make. That’s what I shared with my students: six paragraphs totaling 430 words.
Let’s pause here to reflect, for a moment, on the amount of human intelligence that went into producing this “AI-generated” lecture. I had definite ideas and arguments I wanted to make. I fed those ideas and arguments into ChatGPT, assessed the output, and judged its initial responses too predictable and superficial. And so I landed on a certain way of posing a certain kind of prompt that would make ChatGPT sound maximally competent. (And I’m not even talking about the human intelligence coded into ChatGPT by the writers whose words have been fed into its language model or by the “ghost workers” who helped tag and train its data.)
Even with all that human intelligence at its back, ChatGPT still struggled to make an argument compelling to folks who had actually studied the subject for a semester. For instance, it made interpretive claims about podcast history that sounded right but, on reflection, didn’t square with the facts we knew. Such errors were usually caused by its helpless repetition of thought patterns acquired from other arenas, which simply didn’t make sense in this context. For instance, it posited that the sonic complexity of podcasts must have resulted from increasing competition in recent years, a capitalistic theory contradicted by two facts every student had learned: that sonically ambitious podcasts had existed almost from the start of that medium, and that the most financially successful podcasts today are rambling chatcasts with little in the way of editing or sound design. As a pattern replicator, ChatGPT will reproduce ideology over fact. (And, while it went for a capitalistic ideology at first, it’s an equal-opportunity pattern lover: It can, when prompted, make an equally compelling and equally incorrect Marxist critique of its own argument.)
In hiding the seams of its own relentless pattern replication, ChatGPT uses many of the same tricks I teach my students to avoid as warning signs of insufficient argument. For instance, it exclusively uses transition words like “Another,” “Additionally,” and “Over all” [sic] to start its paragraphs, which may lend an air of structure to the essay but in fact provide no logical connection between adjacent ideas. (Why “another” example — and “another” example of what, exactly?) Then, it couches every claim in ambiguous hedge words like “most,” “often,” “many,” and “some,” which ask the reader to do the writer’s work by deciding for themselves how limited or broad each claim was actually meant to be.
These problems, and more like them, are caused by what is currently the most obvious shortcoming of ChatGPT: its inability to cite and use evidence in anything resembling the way we require in the interpretive humanities. When prodded to cite specific evidence, it supplies a slightly narrower generalization. Even when supplied with specific evidence relevant to its arguments, it cannot do the work of connecting the one to the other. That’s because it is not actually dealing with facts about the world, but with the proximity of various clusters of words in a hugely multidimensional language model. It can endlessly move through the layers of that model and around each layer’s clusters of keywords, but it cannot get below these words to the facts they represent. When it appears to do so, it is usually just narrating part of its own inner architecture, as when it paraphrases a quotation you’ve given it or maps out the structure of a metaphor contained within that quotation. Because we can recognize these limitations, we can work within them and also think beyond them. We of course have the option to thoughtlessly assent, but we can also think twice — and isn’t that a good description of what we teach in the liberal arts?
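(For the technically curious: what does “proximity of clusters of words” mean in practice? The toy sketch below, in Python, is my own illustrative assumption, not a description of ChatGPT’s actual internals; the four-dimensional vectors are invented for this example, where real models learn embeddings with thousands of dimensions. It shows the only kind of “knowledge” such a model manipulates: geometric closeness between word vectors, with no purchase on whether any claim is true.)

```python
# A toy illustration (not ChatGPT's actual internals): words as points in a
# vector space, where "meaning" is nothing more than geometric proximity.
import numpy as np

# Hypothetical 4-dimensional embeddings, invented for this sketch.
embeddings = {
    "podcast": np.array([0.9, 0.8, 0.1, 0.0]),
    "audio":   np.array([0.8, 0.9, 0.2, 0.1]),
    "serial":  np.array([0.7, 0.6, 0.3, 0.2]),
    "invoice": np.array([0.1, 0.0, 0.9, 0.8]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Proximity of two word vectors: 1.0 = same direction, ~0.0 = unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "podcast" sits near "audio" in this space and far from "invoice" -- but
# nothing here encodes whether any claim about podcasts is factually true.
for word in ("audio", "serial", "invoice"):
    score = cosine_similarity(embeddings["podcast"], embeddings[word])
    print(f"podcast ~ {word}: {score:.2f}")
```

Run it and “podcast” scores close to “audio” and far from “invoice,” a nearness that says nothing at all about facts.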
Once we bring the hype around ChatGPT into line with reality, we can grapple with the true effects of generative AI: Far from replacing human intelligence, it will provide new starting points for some of the processes we routinely use to think. Any writers who find unbearable the blank page and blinking cursor may choose instead to put their first, dogged efforts into playing with ChatGPT. If they are patient and canny, they may succeed in getting a creditable first draft out of the machine. By that point, the work they’ve done will already be significant. But then, like any student with a first, creditable draft, they will begin the real work: revision, turning what ChatGPT wrote into what the writer actually means.
As an instructor, I have no interest in a vision of student achievement that would deem this hybrid process illegitimate. The things ChatGPT cannot do (cite and analyze evidence, limit claims, create logical links between claims, arrange those claims into a hierarchy of significance) are the basic stuff of college-level writing.
Me: ChatGPT, please finish that paragraph.
ChatGPT: However, that does not mean that ChatGPT has no value in the classroom. In fact, I believe that it can be a useful tool for helping students to develop their own writing skills and to think more critically about the ideas and arguments that they are presenting. By working with [recognizing the limits of] ChatGPT [and by pushing themselves beyond those limits] and engaging in dialogue with it, students can learn to identify and address the weaknesses in their own writing, and they can learn to structure their arguments in a more logical and coherent way. Additionally, ChatGPT can be a useful resource for generating ideas and prompts for writing assignments, as well as for providing feedback and support during the writing process. Overall, I believe that ChatGPT has the potential to be a valuable tool for student learning and development, as long as it is used in a way that complements and enhances, rather than replacing, traditional methods of instruction. [what is unique to their thinking, and to the way that human beings think. An age of generative AI, helplessly replicating conventional wisdom and ideology, is an age that will need the uniquely, resistantly human more than ever.]