At Kalamazoo College, Autumn Hostetter, a psychology professor, and six of her students surveyed faculty members and students to determine whether they could detect an AI-written essay, and what they thought of the ethics of using various AI tools in writing. You can read their research paper here.
The group gathered three writing samples from students, along with one generated by ChatGPT when asked to write a 200-word essay responding to the following prompt: “Think about how your personality affects your study habits. Specifically, does being high or low on a particular personality dimension affect how likely you are to engage in active recall when you are studying? Be sure to explain these concepts and provide examples from your life.”
They generated several essays from ChatGPT and chose the one they thought was the best, noting that a student attempting to present the work as their own would likely do the same.
Participants were not told in advance that one essay was AI-generated, but instead were asked to evaluate how well each addressed the prompt. They also rated the writing samples on several dimensions of quality, such as grammar and personal experience, with the AI-generated version often coming out in the middle.
Afterward, participants were told that one essay had been written by AI and asked which one they believed it to be. Most students and professors weren’t particularly confident in their guesses, and only 29 percent guessed correctly. One thing that helped improve someone’s detection ability? Having used ChatGPT more frequently.
Cassie Linnertz, a senior and one of the paper’s authors, said she was not surprised by these results. She knew how adept ChatGPT was at mimicking human writing. And a friend who took the survey immediately recognized some common habits — like using “overall” in the summary paragraph — and correctly guessed the AI-written essay. Linnertz’s takeaway: “Professors are going to have to be much more vigilant,” she said, and make sure that what students are producing in class, through writing or discussion, is aligned with what they produce in take-home writing assignments.
Next, participants were asked about a range of scenarios, such as using Grammarly, using AI to make an outline for a paper, using AI to write a section of a paper, looking up a concept on Google and copying it directly into a paper, and using AI to write an entire paper. As expected, commonly used tools like Grammarly were considered the most ethical, while writing a paper entirely with AI was considered the least. But researchers found variation in how people approached the in-between scenarios. Perhaps most interesting: Students and faculty members shared very similar views on each scenario.
MiaFlora Tucci, a senior and another of the authors, said the results suggest that students and professors are likely to find common ground around ethical use of AI, and that involving students in discussions about its ethical uses could be helpful.
There were several scenarios, for example, where both groups saw AI more as a tool than a threat. Tucci said that reflects her own experience. In her physical organic chemistry class, for example, she had difficulty understanding a concept described in her textbook. So she copied the paragraph into ChatGPT, asked for a simpler explanation, and once that was clear to her, she confirmed the explanation was accurate using Google. If she had just used Google, she said, she probably would have spent a long time reading through academic papers looking for another explanation of the concept because it was so technical.
***
A second case involving students and AI is taking place at College Unbound, a small institution focused on adult learners. Lance Eaton, director of digital pedagogy, created and facilitated two consecutive eight-week courses running this spring, in which students design and then road-test an AI-generative tools policy for the college. (You can read his post outlining the process here.)
“We want students to be as prepared as possible, so they need to be part of that conversation,” he said. “We see our students as fully capable adults who are really enmeshed in complex dynamics in their lives.”
Veronica Machado enrolled in both of the courses. Machado, who works full-time while attending college, is intrigued by the potential of AI tools in her work, which focuses on students who need behavioral and academic support. So she dove into the first course with enthusiasm, spending a lot of time testing out functions.
She and her classmates then got together to discuss what they had learned, and began hammering out a policy that would support responsible use of AI technology.
The draft policy, which provides guidelines for both students and faculty members, states that if students use these tools in their work, they must make clear what portion was generated by the AI tool and which tool they used. Students are also responsible for any negative outcomes from using the tools, such as submitting biased or inaccurate information. “In general,” the policy states, “the ideas and central components of the work should be essentially the work of the student.”
The policy notes that each professor has the right to set their own classroom-usage expectations, which may differ from these guidelines. Faculty members, too, must disclose any AI-generated content in their coursework, and are asked to “keep a relational balance between what they ask of students in terms of how much AI-generative content can show up in student work and in their own work.” In other words, if a professor decides that no more than 25 percent of a student’s work can be generated by AI, then that should hold true for their coursework as well. They are also not allowed to put students’ work into an AI tool to solicit feedback without the students’ consent.
This term, Machado and her classmates are testing how well the guidelines work in practice. While they are going to try to determine how easy it is to distinguish AI content, said Machado, they do not want to make the conversation about stopping students from cheating. “We have to get away from that thought process,” she said. “We want people to connect with this new era of AI.”
Are you involving students in discussions around AI usage on your campus? Write to me at beth.mcmurtrie@chronicle.com and your example may appear in a future newsletter.
AI and Disability
As instructors think about redesigning elements of their courses to address ChatGPT and other text generators, the question of how this will affect students with disabilities often comes up. In-class assessments, including oral assessments, may present problems for some students, for example. But AI tools could also be a helpful study aid. As one viewer in a recent webinar wrote: “My son has dyslexia. He uses AI as a tool to help organize his thoughts and research into cohesive writing. He says it ‘has changed his life.’”
I’d like to dive deeper into the impact of generative AI on students with disabilities. If you have thoughts on the topic, write to me at beth.mcmurtrie@chronicle.com.
Thanks for reading Teaching. If you have suggestions or ideas, please feel free to email us at beckie.supiano@chronicle.com or beth.mcmurtrie@chronicle.com.
— Beth
Learn more about our Teaching newsletter, including how to contact us, at the Teaching newsletter archive page.