Look at any student academic-integrity policy, and you’ll find the same message: Submit work that reflects your own thinking or face discipline. A year ago, this was just about the most common-sense rule on Earth. Today, it’s laughably naïve.
There’s a remarkable disconnect between how professors and administrators think students use generative AI on written work and how we actually use it. Many assume that if an essay is written with the help of ChatGPT, there will be some sort of evidence — it will have a distinctive “voice,” it won’t make very complex arguments, or it will be written in a way that AI-detection programs will pick up on. Those are dangerous misconceptions. In reality, it’s very easy to use AI to do the lion’s share of the thinking while still submitting work that looks like your own. Once that becomes clear, it follows that massive structural change will be needed if our colleges are going to keep training students to think critically.
The common fear among teachers is that AI is actually writing our essays for us, but that isn’t what happens. You can hand ChatGPT a prompt and ask it for a finished product, but you’ll probably get an essay with a very general claim, middle-school-level sentence structure, and half as many words as you wanted. The more effective, and increasingly popular, strategy is to have the AI walk you through the writing process step by step. You tell the algorithm what your topic is and ask for a central claim, then have it give you an outline to argue this claim. Depending on the topic, you might even be able to have it write each paragraph the outline calls for, one by one, then rewrite them yourself to make them flow better.
As an example, I told ChatGPT, “I have to write a 6-page close reading of the Iliad. Give me some options for very specific thesis statements.” (Just about every first-year student at my university has to write a paper resembling this one.) Here is one of its suggestions: “The gods in the Iliad are not just capricious beings who interfere in human affairs for their own amusement but also mirror the moral dilemmas and conflicts that the mortals face.” It also listed nine other ideas, any one of which I would have felt comfortable arguing. Already, a major chunk of the thinking had been done for me. As any former student knows, one of the main challenges of writing an essay is just thinking through the subject matter and coming up with a strong, debatable claim. With one snap of the fingers and almost zero brain activity, I suddenly had one.
My job was now reduced to defending this claim. But ChatGPT can help here too! I asked it to outline the paper for me, and it did so in detail, providing a five-paragraph structure and instructions on how to write each one. For instance, for “Body Paragraph 1: The Gods as Moral Arbiters,” the program wrote: “Introduce the concept of the gods as moral arbiters in the Iliad. Provide examples of how the gods act as judges of human behavior, punishing or rewarding individuals based on their actions. Analyze how the gods’ judgments reflect the moral codes and values of ancient Greek society. Use specific passages from the text to support your analysis.” All that was left now was for me to follow these instructions, and perhaps modify the structure a bit where I deemed the computer’s reasoning flawed or lackluster.
The vital takeaway here is that it’s simply impossible to catch students using this process, and that for them, writing is no longer much of an exercise in thinking. The problem isn’t a lack of AI-detection technology: even if we could definitively tell whether any given word was produced by ChatGPT, we still couldn’t prevent this kind of cheating. The ideas on the page can be computer-generated while the prose is entirely the student’s own. No human or machine can read a paper like this and find the mark of artificial intelligence.
There are two possible conclusions. One is that we should embrace the role AI is beginning to play in the writing process. “So what that essays are easier to write now? AI is here for good; students might as well learn to use it.” Of course, it’s important to learn to put together a cohesive piece of written work, so it makes perfect sense to embrace AI on assignments that are meant to teach this skill. In fact, it would be counterproductive not to: If a tool is useful and widely available, students should learn how to use it. But if this is our only takeaway, we neglect the essay’s value as a method for practicing critical thinking. When we want students to learn how to think — something I’m sure all educators consider a top priority — assignments become essentially useless once AI gets involved.
So rather than fully embracing AI as a writing assistant, the reasonable conclusion is that there needs to be a split between assignments on which using AI is encouraged and assignments on which using AI can’t possibly help. Colleges ought to prepare their students for the future, and AI literacy will certainly be important in ours. But AI isn’t everything. If education systems are to continue teaching students how to think, they need to move away from the take-home essay as a means of doing so, and toward AI-proof assignments like oral exams, in-class writing, or some new style of schoolwork better suited to the world of artificial intelligence.
As it stands right now, our systems don’t accomplish either of those goals. We don’t fully lean into AI and teach how to best use it, and we don’t fully prohibit it to keep it from interfering with exercises in critical thinking. We’re at an awkward middle ground where nobody knows what to do, where very few people in power even understand that something is wrong. At any given time, I can look around my classroom and find multiple people doing homework with the help of ChatGPT. We’re not being forced to think anymore.
People worry that ChatGPT might “eventually” start rendering major institutions obsolete. It seems to me that it already has.