I’m Scott Carlson, a senior writer at The Chronicle covering higher ed and where it’s going. This week, I interview John Warner about his new book and what it says about how writing, teaching, education, and the workplace are all responding to the influence of AI.
In a world with ChatGPT, why bother writing?
John Warner is a writer who has made a career out of teaching and encouraging other people to write. Along the way, Warner — who is a columnist for Inside Higher Ed and the Chicago Tribune — has leveled substantial criticism at the misaligned incentives and deadening curricula in schools and colleges that lead students to produce writing that is stilted, soulless, and downright bad.
Now we live in a world where a large language model can generate a grammatically clean, decently organized school essay, business-report preface, or letter to mom with just a few simple prompts. What’s the role of writing — or writers — when generative AI can produce all the content you’ll ever need?
In his new book, More Than Words: How to Think About Writing in the Age of AI, Warner argues that “writing is thinking” — and that the use of emotion, memory, physicality, and community all allow humans to create writing that AI cannot reproduce. His book ranges across the problems that AI presents to schools, to workplaces, and to people who write for money. Educational institutions, he argues, need to make the process of learning a “root value” of writing, rather than prizing the perfect finished essay, an emphasis that drives students to ChatGPT. “In a situation where the outcome is valued more than the process,” Warner writes, “a shortcut to the desired outcome will always be attractive.”
This interview has been edited for length and clarity.
A big part of your theme in More Than Words is that the process matters more than the product. Isn’t this what you’ve been saying about education more generally over the years, in Why They Can’t Write and elsewhere?
I felt very, very prepared for the arrival of ChatGPT because my message for years has been: Why are we asking students to write to formulas? Why are we using these prescriptive templates for teaching students how to write? These have nothing to do with writing.
So it was not surprising to me that ChatGPT shows up and sounds like a student writing a five-paragraph essay, because it has probably been trained on uncountable volumes of this stuff. But it should have been a wake-up call for everybody: OK, a computer can do this. What should we humans be doing? If we decide something that a large language model can write is still worth writing — and there are plenty of things worth writing that LLMs can write — what are we going to value about that experience that’s meaningful?
Which gets back to process: We learn through experiences, and the more mindful we are about those experiences as we’re learning, the more we learn. The crazy thing is, we know this about learning. ChatGPT should have been like a siren, but I don’t see a ton of adjustment. I see far more uncritical embrace of the technology than I do serious thinking about how we need to change teaching.
Could you paint a picture of what a college classroom looks like that is well-adjusted to this generative AI world that’s coming?
It’s variable, but it really needs to be centered around the values of experience, process, reflection, and metacognition.
There are a couple of questions I have asked in my classes for the last 15 years: The first is, What do you know now that you didn’t know before? Literally, what knowledge have you gained? That may be writing knowledge, but it is also subject matter knowledge. They realize they now know something about the world that they didn’t know before. Then I ask, What can you do now that you could not do before? And that, for me, brings the knowledge into play. It’s like, this knowledge now allows me to do some new thing. It creates this self-reinforcing cycle where students get used to asking themselves these questions, and they just learn more. One of the things they’re learning is that they know how to learn.
I’ve become more and more interested in one of my mantras: do less that matters more. Get away from the volume of production or the number of things we do, and instead focus on the depth of things, the depth of engagement.
We can embed questions like those in subject-matter courses. I think we would do better by students: They would learn more, be happier, be more engaged, and they’d be less likely to turn to things like ChatGPT as an end run around doing work. I don’t think we can police or detect our way out of this when it comes to catching cheating and large language models. We have to give students a superior alternative.
Given the news about the decline in reading, the longstanding test orientation of schools, and the public doubts about schools and colleges, is the education system just so irrevocably broken that we just aren’t able to deal with some of these threats?
I hope not. There’s a lot of distressing evidence that this may be the case. The point you make, though, is that the incentive structure and power structure militate against the sort of freedom and free choice that I am advocating for students, because of the way educational institutions are managed and the incentives around how we move students through them.
There was great enthusiasm for the predictive-analytics movement of 10 or 12 years ago: We’re going to tell students, based on how they do in these courses and their demographic information, what they should major in so we can get them through college. I was deeply unenthusiastic about that from the get-go. Students should be able to go to college to figure out what they’re interested in, and we shouldn’t worry too much about what a good major is. Who knows, right? The “learn to code” folks have seen coding collapse as an industry.
On the other hand, when we have a system where students are paying huge sums at great risk to their own finances to do this stuff, it’s hard not to think in these terms. In the book I wrote about making public college free, my rationale was that free college would allow everybody to think about the mission of education and learning, rather than the operations of collecting tuition revenue, awarding credentials, and moving people into the workplace.
That notion is obviously pie in the sky at this moment. But I do see glimmers of hope at the grassroots, as there are a lot of conversations about making college meaningful — and I would include your book, Hacking College, as part of this. There’s a genuine hunger for authentic learning experiences, but it’s not well supported, and pursuing it often feels like you’re getting away with something. The system could figure out how to value it; it’s just not organized around those values.
I want to turn toward the question of becoming a professional writer in a Gen AI world. When I started in journalism as an intern, I transcribed the taped interviews of some star investigative reporters — a task that improved my typing, taught me the cadence of a journalism interview, and gave me the support I needed to get started in the field. Now transcription software can do that faster and cheaper. How do you think AI affects these ground-level entry points for young people after college?
It’s a worry. If you think about it, a lot of professions work well under what you’ve described and what I write about in the book as an apprenticeship model. When you enter a profession or professional space, you are of some, but limited, use. Even at a relatively low wage, the value of the work you’re able to do for the organization — typing, formatting PowerPoints, proofreading documents — is relatively low. But when you and I were younger, the understanding was that you bring people in to apprentice them so that they can become the people who will occupy the senior positions. There is an inherent inefficiency there that is made up for by sustainability.
All of that is called into question by market structures. You can now create “content” and sell ads or subscriptions against it using highly automated processes, while not caring much about the quality of the news, as long as it resembles news. The private-equity takeover of so many local newspapers speaks to this. So your question scares me a little, because you do wonder, Where are the skills going to come from if we don’t have a system that allows people to apprentice themselves?
I’ve been looking at some cultural artifacts around how AI companies view humans, and they think we are dumb and ignorant, and that we would like to stay that way by using their products. Look at the Apple Intelligence ads: A guy is sitting at his desk, making a chain of paper clips, playing with the tape dispenser — he’s a total idiot, right? And he needs to write an email, so he types some text-tweet nonsense into his AI interface, which gets translated into business-speak, and he sends it to his boss, who gets this confused look, like he didn’t know what a genius this guy was.
This is what they think about our lives as workers, and this is not a vision we want to embrace. We have to think of this technology as potentially dangerous to our well-being. If it de-skills us or alienates us from the things that we think are important about our labor, we’re sort of signing our own death warrant.
In More Than Words, you raise some doubts about whether LLMs and other generative AI will ever be truly useful. For instance, you asked AI to write your column in your style, and it did a terrible job. Since you finished the book, have there been any advances that have made you rethink that assessment?
No — in fact, I’ve become less interested in using this technology for my work over time. I realized something, actually, while I was experimenting with The Chronicle’s AI tool: The Chronicle of Higher Education has a vast archive of articles spanning a long period, an interesting trove of information. So I asked your AI some broad questions, like, “Is tenure good?” The response was equivocal, which is probably as it should be.
But when I asked more specific things, I found what I’ve found with other large language models: The summary of information is less useful to my thinking than a single source. What I need is someone else’s unique intelligence, their individual perspective. That’s what fuels my own thinking. A general summary by AI is often at or even below the level of a Wikipedia article, which I will read if I need basic background information, so I’m no longer in a state of complete ignorance. Depending on the specific use case, this technology is marginally better — or marginally worse — than that kind of tool.
This technology is obviously powerful. It’s not vaporware. It’s not NFTs. It’s not MOOCs — some fantasy for cutting education costs. It clearly has applications. There’s this guy, Mike Caulfield, who developed a fact-checking method called SIFT and co-wrote a book about verifying online facts with a historian at Stanford. He’s doing fascinating experiments with Toulmin arguments — structured rhetorical arguments — using large language models. He’s essentially turned them into a machine for this kind of logic. Toulmin reasoning relies on patterns, and you can create a prompt that does pattern matching for you, so you can judge the outcomes. But it’s only useful because he already knows how to do this kind of reasoning.
The last chapter of my book is called “Explore,” and this is the kind of exploration we should be doing. We need people like Mike Caulfield — people with a deep foundation in thinking and reasoning — to use these tools in meaningful ways. His extensive background in misinformation and fact-checking makes him uniquely positioned to experiment with this.
The idea that we just need to train students to be prompt engineers or technicians? That’s incredibly short-sighted. Everyone I’ve seen using this technology productively has some kind of native practice underneath what they’re doing with it.
What you’re saying is that depth and discipline really matter — that depth sparks ideas and enables the kind of synthetic thinking that sees parallels in other things, in a way a machine might not.
While I was working on More Than Words, someone in an interview asked me what I had learned. I learned a lot about generative AI, obviously. But what really struck me was a new appreciation for reading. I realized I had taken my ability to read for granted — not just my ability to think critically while reading, but even the fact that I can sit and concentrate for hours at a time, reading a book. It shouldn’t be viewed as a superpower, but it does seem to be a bit of a differentiator.
A lot of this I credit to my early education. I was incredibly fortunate: I grew up in a well-resourced town, with teachers who were my neighbors, and I had an amazing grade school that shaped me into a thinking human being. My mom owned a bookstore. There was a real sense of community. But that world doesn’t exist in the same way anymore.
Today, the teachers in my city of Charleston can’t afford to live here, because you can’t survive on $38,000 a year. I don’t know what to do about that. It’s part of a much larger societal shift. The erosion of those foundational elements — education, community, real engagement — is part of what makes this technology attractive to people.