When I was a kid, my favorite poem was Shel Silverstein’s “The Homework Machine,” which summed up my childhood fantasy: a machine that could do my homework at the press of a button. Decades later that technology, the innocuously titled GPT-3, has arrived. It threatens many aspects of university education — above all, college writing.
The web-based GPT-3 software program, which was developed by an Elon Musk-backed nonprofit called OpenAI, is a kind of omniscient Siri or Alexa that can turn any prompt into prose. You type in a query — say, a list of ingredients (what can I make with eggs, garlic, mushrooms, butter, and feta cheese?) or a genre and prompt (write an inspiring TED Talk on the ways in which authentic leaders can change the world) — and GPT-3 spits out a written response. These outputs can be astonishingly specific and tailored. When asked to write “a song protesting inhumane treatment of animals in the style of Bob Dylan,” the program clearly draws on themes from Dylan’s “Blowin’ in the Wind”:
How many more creatures must suffer?
How many more must die?
Before we open up our eyes
And see the harm we’re causing?
When asked to treat the same issue in the style of Shakespeare, it produces stanzas of iambic tetrameter in appropriately archaic English:
By all the gods that guide this Earth
By all the stars that fill the sky
I swear to end this wretched dearth
This blight of blood and butchery.
GPT-3 can write essays, op-eds, tweets, jokes (admittedly just dad jokes for now), dialogue, advertisements, text messages, and restaurant reviews, to give just a few examples. Each time you click the “submit” button, the machine-learning algorithm pulls from the wisdom of the entire internet and generates a unique output, so that no two end products are the same.
The quality of GPT-3’s writing is often striking. I asked the AI to “discuss how free speech threatens a dictatorship, by drawing on free speech battles in China and Russia and how these relate to the First Amendment of the U.S. Constitution.” The resulting text begins, “Free speech is vital to the success of any democracy, but it can also be a thorn in the side of autocrats who seek to control the flow of information and quash dissent.” Impressive.
The current iteration of GPT-3 has its quirks and limitations, to be sure. Most notably, it will write absolutely anything. It will generate a full essay on “how George Washington invented the internet” or an eerily informed response to “10 steps a serial killer can take to get away with murder.” In addition, it stumbles over complex writing tasks. It cannot craft a novel or even a decent short story. Its attempts at scholarly writing — I asked it to generate an article on social-role theory and negotiation outcomes — are laughable. But how long before the capability is there? Six months ago, GPT-3 struggled with rudimentary queries, and today it can write a reasonable blog post discussing “ways an employee can get a promotion from a reluctant boss.”
Since the output of every inquiry is original, GPT-3’s products cannot be detected by anti-plagiarism software. Anyone can create an account for GPT-3. Each inquiry comes at a cost, but it’s usually less than a penny — and the turnaround is instantaneous. Hiring someone to write a college-level essay, in contrast, currently costs $15 to $35 per page. The near-free price point of GPT-3 is likely to entice many students who would otherwise be priced out of essay-writing services.
It won’t be long before GPT-3, and the inevitable copycats, infiltrate the university. The technology is just too good and too cheap not to make its way into the hands of students who would prefer not to spend an evening perfecting the essay I routinely assign on the leadership style of Elon Musk. Ironic that he has bankrolled the technology that makes this evasion possible.
To help me think through what the collision of AI and higher ed might entail, I naturally asked GPT-3 to write an op-ed “exploring the ramifications of GPT-3 threatening the integrity of college essays.” GPT-3 noted, with mechanical unself-consciousness, that it threatened to “undermine the value of a college education.” “If anyone can produce a high-quality essay using an AI system,” it continued, “then what’s the point of spending four years (and often a lot of money) getting a degree? College degrees would become little more than pieces of paper if they can be easily replicated by machines.”
The effects on college students themselves, the algorithm wrote, would be mixed: “On the positive side, students would be able to focus on other aspects of their studies and would not have to spend time worrying about writing essays. On the negative side, however, they will not be able to communicate effectively and will have trouble in their future careers.” Here GPT-3 may actually be understating the threat to writing: Given the rapid development of AI, what percent of college freshmen today will have jobs that require writing at all by the time they graduate? Some who would once have pursued writing-focused careers will find themselves instead managing the inputs and outputs of AI. And once AI can automate that, even those employees may become redundant. In this new world, the argument for writing as a practical necessity looks decidedly weaker. Even business schools may soon take a liberal-arts approach, framing writing not as career prep but as the foundation of a rich and meaningful life.
So what is a college professor to do? I put the question to GPT-3, which acknowledged that “there is no easy answer to this question.” Still, I think we can take some sensible measures to reduce the use of GPT-3 — or at least delay its adoption by students. Professors can require students to draw on in-class material in their essays, and to revise their work in response to instructor feedback. We can insist that students cite their sources fully and accurately (something that GPT-3 currently can’t do well). We can ask students to produce work in forms that AI cannot (yet) effectively create, such as podcasts, PowerPoints, and verbal presentations. And we can design writing prompts that GPT-3 won’t be able to effectively address, such as those that focus on local or university-specific challenges that are not widely discussed online. If necessary, we could even require students to write assignments in an offline, proctored computer lab.
Eventually, we might enter the “if you can’t beat ’em, join ’em” phase, in which professors ask students to use AI as a tool and assess their ability to analyze and improve the output. (I am currently experimenting with a minor assignment along these lines.) A recent project on Beethoven’s 10th symphony suggests how such assignments might work. When he died, Beethoven had composed only 5 percent of his 10th symphony. A handful of Beethoven scholars fed the short, completed section into an AI that generated thousands of potential versions of the rest of the symphony. The scholars then sifted through the AI-generated material, identified the best parts, and pieced them together to create a complete symphony. To my somewhat limited ear, it sounds just like Beethoven.