To the Editor:
Recently, a student article appeared in these pages declaring how easy it is for students to use AI text generators such as ChatGPT to “do the lion’s share of the thinking” for their coursework (“I’m a Student. You Have No Idea How Much We’re Using ChatGPT,” The Chronicle Review, May 12). The student detailed the process his classmates were using: relying on AI to generate arguments and outlines, then editing the output so the final product is “work that looks like your own.” He was confident in his systematic approach to cheating, stating it was “simply impossible to catch students using this process.”
The article caught my eye not only because the writer is studying at my alma mater, Columbia University, but also because I’m the chief product officer at Turnitin, one of the world’s most frequently used integrity backstops. Our systems check millions of academic assignments every day for similarity, writing voice, and the hallmarks of AI text generation.
Because of this work, I can say just as confidently, in fact more confidently, that it is not only possible but probable for AI-generated text to be detected.
In fact, in just the first month that our AI-detection system was available to educators, we flagged more than 1.3 million academic submissions as likely having more than 80 percent of their content written by AI. These flags alert educators to take a closer look at a submission and then use the information to aid in their decision-making.
Modifying text created by AI does blur the signal that detection systems use to find it. But it does not do so reliably or absolutely, especially if students use other AI programs, such as paraphrasing applications, to make the changes. Layering one application on top of an AI system in the hope of erasing its signature is simply substituting one AI for another.
Moreover, even if you succeed in sneaking past an AI detector or your professor, academic work lives forever. You are not just betting that you are clever enough, or your process elegant enough, to fool the checks in place today; you are betting that no technology will be good enough to catch it tomorrow. That is not a good bet. Remember that as generative AI gains sophistication, so do detectors, especially those built for a distinct purpose such as detecting generative AI in academic writing.
While the science of generative AI and AI detection is fascinating, it does not change the perpetual and vital truth that the purpose and power of your studies is to learn by doing the work. Using AI to avoid that work and evade detection inhibits learning. Consider the Texas Supreme Court’s recent ruling allowing universities to retroactively strip degrees from students caught cheating. For those who cheat, that should put a chill in your bones.
All this being said, I do not mean to imply that there is no place for AI text generation in education. Many educators are using the tools creatively, proactively, and effectively, an innovation that should be encouraged and continuously refined.
To be sure, there are places where the process of writing may not be as valuable. Some business settings come to mind. Does it matter how an earnings report is created? Does the originality of a weekly summary of closed business deals matter? Probably not. But I am a manager of many, and I can tell you that I need people who can write effectively to communicate arguments, recommendations, and risks.
That requires learning the skills of the writing process. At this moment, the productive, constructive conversation should center on how to thoughtfully and beneficially add new technologies to existing learning systems, rather than on how they can be used to replace actual thinking, allowing students to fool themselves and their teachers.
As is often the case when new technologies capture markets and imaginations, misinformation and bad advice follow. AI text generators such as ChatGPT are no different. Don’t believe everything you hear or read about what is or is not detectable.
Chief Product Officer