As faculty continue to debate how artificial intelligence might disrupt academic integrity, the popular plagiarism-detection service Turnitin announced on Monday that its products will now detect AI-generated language in assignments.
Turnitin’s software scans submissions and compares them to a database of past student essays, publications, and materials found online, and then generates a “similarity report” assessing whether a student inappropriately copied other sources.
The company says the new feature will allow instructors to identify the use of tools like ChatGPT with “98-percent confidence.”
There is no option to turn the feature off, a Turnitin spokesperson told The Chronicle. The company has made exceptions, suppressing AI detection for a select number of customers with unique needs or circumstances, but it did not specify for whom such exceptions would be made. The tool is available to more than 10,000 institutions, including many colleges and K-12 schools, and 2.1 million educators, according to the company.
Chris Caren, chief executive of Turnitin, said in a statement that educators have told the company that being able to accurately detect text written using artificial intelligence is their highest priority.
“They need to be able to detect AI with very high certainty to assess the authenticity of a student’s work and determine how to best engage with them,” Caren said. “It is equally important that detection technology becomes a seamless part of their existing workflow.”
Most colleges, departments, and individual faculty members have not yet developed guidelines on how AI tools like ChatGPT should be used in the classroom, according to a recent survey. So detection software could be helpful in the short term “to keep the dam from breaking” as professors continue to discuss what to do about ChatGPT, said David Rettinger, president emeritus of the International Center for Academic Integrity, an organization founded to combat cheating, plagiarism, and academic dishonesty in higher ed.
But the feature’s usefulness will decline as ChatGPT and other AI tools become more mainstream, Rettinger said. That might happen sooner rather than later: ChatGPT debuted in November and has quickly grown to more than 100 million monthly active users.
Sarah Eaton, an associate professor of education at the University of Calgary who studies academic integrity, said detection software could soon become “futile” as artificial intelligence is increasingly used to draft and edit human writing — or the other way around.
“Really soon, we’re not going to be able to tell where the human ends and where the robot begins, at least in terms of writing,” Eaton said.
For the time being, higher ed will likely be playing catch-up until professors figure out how to integrate artificial intelligence into their classrooms. Then, the challenge will be teaching students how to use it and coming up with new methods of assessment, Rettinger said.
“In the long run, it is absolutely incumbent upon us as a higher-ed sector to change the way we think about writing and assessment as the result of these changes in technology,” Rettinger said.
While the AI-detection feature could be helpful in the immediate term, it could also lead to a surge in academic-misconduct cases, Eaton said. Colleges will have to figure out what to do with those reports at a moment when professors have yet to find consensus on how ChatGPT should be dealt with in their classrooms.
But banning tools like ChatGPT is pointless, Eaton said.
“This technology is here — it’s ubiquitous,” Eaton said. “If we want to prepare our students for a present and a future where AI is part of their reality, then this is something that we’re going to need to confront.”