Shelby Kendrick, a teaching assistant at the University of California at Berkeley, started playing around with ChatGPT in February, a few months after the large language model appeared, making headlines for turning out relatively high-quality prose in response to just a few basic prompts.
The knowledge she gained came in handy this spring, when she, two other TAs, and a professor suspected some students of using ChatGPT in their survey course on the history of architecture. They would soon learn that students had used it in a range of ways. A couple had typed an assignment prompt into ChatGPT and submitted the AI essay with no changes. Others had directed it to write more tailored papers, with arguments and examples they provided. Still others used it to help generate ideas but copied a lot of the text ChatGPT had produced into their work. And a few students, most of whom spoke English as a second language, used it to polish their writing.
With detection tools and some common-sense observations, Kendrick, a doctoral student in architecture, and her colleagues flagged 13 of their 125 students for using AI-generated text. The flagged text often contained content not covered in class, was grammatically correct but lacked substance and creativity, and used awkward phrasing, such as inserting the label “thesis statement” before a generic thesis.
Rather than confront most of these students directly, the instructors told everyone that they had conducted an in-depth review of submissions and would allow students to redo the assignment without penalty if they admitted to using ChatGPT. All but one of the 13 came forward, plus one who had not been flagged.
Cheating has always been a challenge for professors to navigate. But as Kendrick’s experience illustrates, ChatGPT and other generative AI systems have added layers of complexity to the problem. Since these user-friendly programs first appeared in late November, faculty members have wrestled with many new questions even as they try to figure out how the tools work.
Is it cheating to use AI to brainstorm, or should that distinction be reserved for writing that you pretend is yours? Should AI be banned from the classroom, or is that irresponsible, given how quickly it is seeping into everyday life? Should a student caught cheating with AI be punished because they passed work off as their own, or given a second chance, especially if different professors have different rules and students aren’t always sure what use is appropriate?
Some news articles, surveys, and Twitter threads suggest that cheating with ChatGPT has become rampant in higher education, and professors already feel overwhelmed and defeated. But the real story appears to be more nuanced. The Chronicle asked readers to share their experiences with ChatGPT this semester to find out how students had been using it, if instructors saw much cheating, whether they had incorporated ChatGPT into their teaching through discussions or assignments, and how they planned to modify their coursework to reckon with AI this fall. More than 70 people wrote in.
Responses were all over the map.
A small number of professors considered any use of AI to be cheating. “IT IS PLAGIARISM. FULL STOP,” wrote Shannon Duffy, a senior lecturer in the history department at Texas State University. “I am infuriated with colleagues that can’t seem to see this — they are muddying the message for our students.”
A few others have embraced ChatGPT in their teaching, arguing that they need to prepare their students for an AI-infused world. “Like anything, when you forbid a use, students want to use it more,” wrote Kerry O’Grady, an associate professor of public relations at Columbia University. “If you welcome use in appropriate ways, they feel empowered to use AI appropriately.”
Many faculty, though, remain uncertain: willing to consider ways in which these programs could be of some value, but only if students fully understand how they operate.
“I could see this being a tool for an experienced practitioner to push their capabilities in directions they might not ordinarily consider,” wrote William Crosbie, an associate professor of arts and design at Raritan Valley Community College, in New Jersey. “But for novice users it gives the impression of quality with nothing upholding that impression.”
Instances of AI use were often easy to spot, instructors said. They noticed their students’ writing suddenly becoming more sophisticated and error-free overnight. Essays and discussion posts might mention topics that had never been covered in class. Summaries of readings were incorrect. A few professors said their many years of teaching experience — one termed it “Spidey sense” — helped them identify AI-written work.
Proving that students had cheated, though, was time-consuming. A number of professors said they ran suspicious work through Turnitin’s new AI detector, although that is far from foolproof. Like many detectors, the program has been criticized for turning out false positives. Turnitin has stated that its results alone should not determine whether a student cheated, and it has released new guidance, updating the analysis of the detector’s accuracy based on usage since its release in April. And it’s clear that many instructors saw the tool as a starting point for investigation.
Professors would also compare their students’ writing with prior work, run the original assignment through ChatGPT to see if any passages it produced looked similar to what the student turned in, or meet with the student directly to share their concerns.
Aimee Huard, chair of the social-science department at Great Bay Community College, in New Hampshire, described AI detection as “an arduous process,” because instructors had to compare problematic work with other assignments submitted by the student over the year. She wondered how her department, which found 12 incidents of AI usage in 53 courses, was going to manage this challenge and educate students about proper use of the tools in a consistent way, especially given how many part-time adjunct instructors teach there. She was looking for, among other things, “tips for how to not lose one’s mind trying to ‘catch’ students or outsmart them in assignments and courses.”
In some cases, if an instructor felt sure the work was not a student’s own, they simply gave the student a zero. Others — in part because AI tools are so new — used the incidents as teaching opportunities, speaking directly with a student they suspected of passing off AI-generated work as their own, or with their class as a whole after such problems arose. Using real-life examples, they could show students how and why ChatGPT had failed to do the work the students should have done themselves.
Lorie Paldino, an assistant professor of English and digital communications at the University of Saint Mary, in Leavenworth, Kan., described how she asked one student, who had submitted an argument-based research essay, to bring her the printed and annotated articles they had used for research, along with the bibliography, outline, and other supporting work. Paldino then explained to the student why the essay fell short: It was formulaic, inaccurate, and lacked necessary detail. The professor concluded by showing the student the Turnitin results, and the student admitted to using AI.
“I approached the conversation as a learning experience,” Paldino wrote. “The student learned that day that AI does not read and analyze sources, pulling direct quotes and relevant facts/data to synthesize with other sources into coherent paragraphs … AI also lies. It makes up information if it doesn’t know something.” In the end, the student rewrote the paper.
Sometimes, though, professors who felt they had pretty strong evidence of AI usage were met with excuses, avoidance, or denial.
Bridget Robinson-Riegler, a psychology professor at Augsburg University, in Minnesota, caught some obvious cheating (one student forgot to take out a reference ChatGPT had made to itself) and gave those students zeros. But she also found herself having to give passing grades to others even though she was pretty sure their work had been generated by AI (the writings were almost identical to each other).
She plans to show her next class that she’s aware of what such prose looks like, even though she expects students will simply edit the output more carefully. “But at least they will have to read it and dummy it down so they may learn something from that process,” she wrote. “Not sure there is much I can do to fix it. Very defeated.”
Christy Snider, an associate professor of history at Berry College, in Georgia, suspected several students of using AI and called three of them in for meetings. Two denied it; one admitted it.
“One of the people who denied it said the reasons why her answers were wrong was because she didn’t read the book carefully so just made up answers,” Snider wrote. “I gave them all 0 but didn’t turn any of them in for academic integrity violations because although I was sure all three used it — I wasn’t sure my fellow faculty members would back me up if I couldn’t prove 100% it was cheating.”
Snider’s case illustrates another point that many faculty members made: Whether or not they could prove a student used AI, they often gave the work low marks because it was so poorly done.
“At the end of the day AI wasn’t really the biggest issue,” wrote Matthew Swagler, an assistant professor of history at Connecticut College, who strongly suspected two students in an upper-level seminar of using AI in writing assignments. “The reason they had to rewrite them was because they hadn’t actually worked closely with the reading to answer the prompt.”
Another common finding: Professors realized they needed to get on top of the issue more quickly. It wasn’t enough to wait until problems arose, some wrote, or to simply add an AI policy to their syllabus. They had to talk through scenarios with their students.
Swagler, for example, had instituted a policy that students could use a large language model for assistance, but only if they cited its usage. But that wasn’t sufficient to prevent misuse, he realized, nor to prevent confusion among students about what was acceptable. Some students worried, for example, that using Grammarly without citing it would be considered cheating.
He initiated a class discussion, which was beneficial: “It became clear that the line between which AI is acceptable and which is not is very blurry, because AI is being integrated into so many apps and programs we use. … I didn’t have answers for all of their questions and concerns but it helped to clear the air.”
After responding to students’ use of ChatGPT on the fly during the past academic year, this summer is professors’ window to plan their courses with it in mind. Responses to The Chronicle’s online form capture a number of ways they’re doing so.
The instructors who filled out the form are not a representative sample, and may have stronger views on the topic than faculty members as a whole. Still, their answers give a sense of which responses to ChatGPT and other generative AI systems are common:
Nearly 80 percent of respondents indicated plans to add language to their syllabi about the appropriate use of these tools.
Almost 70 percent said they planned to change their assignments to make it harder to cheat using AI.
Nearly half said they planned to incorporate the use of AI into some assignments to help students understand its strengths and weaknesses.
Around 20 percent said they’d use AI themselves to help design their courses.
Just one person indicated plans to carry on without changing anything.
A number of professors noted that they hadn’t yet gotten much guidance from their departments or colleges, but they hoped more would be coming during the summer.
“The silence about AI on campus is shocking,” wrote Derek Lee Nelson, an adjunct professor at Everett Community College, in Washington State. “Nationwide, college administrators don’t seem to fathom just how existential AI is to higher education.”
Another professor was frustrated by the lack of “actual practical how-to-suggestions” for time-strapped faculty members already teaching heavy loads.
Professors have come up with a variety of ways to try to reduce the likelihood of students cheating with AI. Some plan to have students do more work in class, or to rework assignments so that students draw on personal experiences or other material that AI has less access to.
Susan Rosalsky, an associate professor and assistant chair of the English department at Orange County Community College (SUNY Orange), in New York, plans to do more in-class writing — and to incorporate class activities that “ask students to assess examples of computer generated prose.” She is hoping that she can also “spur conversation and awareness” within her department.
Janine Holc thinks that students are much too reliant on generative AI, defaulting to it, she wrote, “for even the smallest writing, such as a one sentence response uploaded to a shared document.” As a result, wrote Holc, a professor of political science at Loyola University Maryland, “they have lost confidence in their own writing process. I think the issue of confidence in one’s own voice is something to be addressed as we grapple with this topic.”
To ensure students practice writing without ChatGPT, Holc is making some significant changes. “For the coming year I am switching to all in-class writing and all hand writing, using project-based learning,” she wrote. She’ll ask staff how best to work with students who need accommodations.
Helena Kashleva, an adjunct instructor at Florida SouthWestern State College, sees a sea change in STEM education, noting that many assignments in introductory courses serve mainly to check students’ understanding. “With the advent of AI, grading such assignments becomes pointless.”
With that in mind, Kashleva plans to either remove such assignments or ask for a specific, personal opinion as part of the response to make it harder for students to rely entirely on the technology.
Faculty members were clearly caught off guard this semester by their students’ inappropriate use of AI. So it’s no surprise that many feel the need to set some ground rules next semester, starting on Day 1.
Shaun James Russell hopes to receive more guidance from his department over the summer, but in the meantime, he’s drafted a policy for his “Introduction to Poetry” course this fall.
“As a non-tenured professor of writing and literature,” wrote Russell, a senior lecturer in the English department at Ohio State University, “I *do* have some mild concerns about how ChatGPT could eventually cause powers-that-be to think that writing is less of a university-wide essential skill down the road ... but I also think that the field will need to embrace and work with AI, rather than try to ban it outright.”
Still, he’s asking the students in his poetry class not to use it. On his syllabus, Russell plans to say: “Generative AI is here, and surely here to stay. You may be tempted to use it at some point in the semester, but I ask that you do not. Most of what we do in this course develops your own analytical skills and insights, and the two major written assignments are fundamentally about your interpretations of poetry.”
Other professors are considering ways to incorporate AI into their teaching.
Julie Morrison, chair and professor of psychology and director of assessment at Glendale Community College, wrote that she is “spending this summer figuring out how we can use it as a tool.” One resource she hopes to draw on: her 16-year-old son, who is “really into AI.”
Already, Morrison has played around with how students might use the tool to get started on a research project for her course: brainstorming research questions, and looking around for psychological scales to measure the outcomes — self-efficacy, say, or depression — they’re interested in. She’s also working with a colleague who is looking at other AI tools “that might spice up a presentation or help with data visualization,” Morrison wrote.
O’Grady, the Columbia professor, also wants to help students learn to use AI effectively. She explains that AI can help them come up with ideas, refine their understanding of a tricky concept, or spark their creativity. But she cautions them against using it to write — or as a replacement for lectures.
O’Grady, who also works on a team that provides pedagogical support to faculty members, has encouraged her colleagues to use generative AI in their own work. “AI can help with lesson planning,” she wrote, “including selecting examples, reviewing key concepts before class, and helping with teaching/activity ideas.” This, she says, can help professors save both time and energy.
Amid the confusion caused by the introduction of ChatGPT and other AI tools, one thing is clear. What professors and academic leaders do this summer and fall will be pivotal in determining whether they can find the line separating appropriate use from outright abuse of AI.
Given how widely faculty members vary on what kinds of AI are OK for students to use, though, that may be an impossible goal. And of course, even if they find common ground, the technology is evolving so quickly that policies may soon become obsolete. Students are also getting more savvy in their use of these tools. It’s going to be hard for their instructors to keep up.
Beth McMurtrie is a senior writer for The Chronicle of Higher Education, where she focuses on the future of learning and technology’s influence on teaching. In addition to her reported stories, she is a co-author of the weekly Teaching newsletter about what works in and around the classroom. Email her at beth.mcmurtrie@chronicle.com and follow her on LinkedIn.
Beckie Supiano is a senior writer for The Chronicle of Higher Education, where she covers teaching, learning, and the human interactions that shape them. She is also a co-author of The Chronicle’s free, weekly Teaching newsletter that focuses on what works in and around the classroom. Email her at beckie.supiano@chronicle.com.