Open your favorite social-media platform and you’ll see dozens of threads from faculty members in anguish over students using ChatGPT to cheat. Generative AI tools aren’t going away, and neither is the debate over whether using them amounts to academic dishonesty. But beyond that issue is another worth considering: What can our students learn about writing — and their own writing process — through the open use of generative AI in the college classroom?
In his recent New Yorker essay, “Why AI Isn’t Going to Make Art,” Ted Chiang aptly describes students’ use of AI to avoid learning and the dire effect that has on their skills development: “Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.”
Learning requires friction, resistance, and even failure. Some three decades ago, Robert A. Bjork, a psychologist at the University of California at Los Angeles, coined the term “desirable difficulty” to describe the benefits that students get from doing increasingly challenging tasks to enhance their learning. ChatGPT removes many of those desirable difficulties by offering the user a frictionless experience: Prompt AI with a question, and get an instant answer. The student’s brain offloads everything to an algorithm.
Given that reality, how can you as a faculty member respond? In my first column, “Why We Should Normalize Open Disclosure of AI Use,” I noted that students are eager for standards because they want to use the technology openly and ethically. So the first step in responding is to set those standards in your own courses, and “normalize” disclosure of AI usage.
Here I will focus on the second step: how to introduce a bit of intentional friction into your students’ use of AI and find ways for them to demonstrate their learning when using the technology. Educators including Leon Furze, Katie Conrade, and Jane Rosenzweig have all written about the need to keep friction as a feature of the college classroom and not let generative tools automate learning.
In my own courses and as director of an AI institute for instructors at my university, I’ve adopted and suggested this method: As part of the assignment, require students to critically evaluate how they used the technology and how it affected their writing process. That way, they aren’t just passively relying on AI-generated content but meaningfully assessing its role in their writing.
Using a tool like ChatGPT often obscures the most critical aspects of a student’s writing process, leaving the instructor uncertain about which skills the student actually exercised. So I created a form — the AI-Assisted Learning Template — to guide students in evaluating their own AI use on a particular assignment.
On the template, I first ask students to “highlight how you used human and machine skills in your learning” in five potential categories, and offer them a range of options to characterize whether and how they used AI tools to do the work:
- Idea generation and critical thinking (for example: “I generated all of my ideas independently” or “I collaborated with AI to refine and expand on initial concepts”).
- Research and information (“I utilized AI-powered search tools to find relevant information” or “I used AI-summarized articles but drew my own conclusions”).
- Planning and organization (“I organized and structured my assignment on my own” or “I started with an AI-generated outline and developed it with my own insights”).
- Content development (“I wrote all content without AI assistance” or “I expanded on AI-generated paragraphs with my own knowledge and creativity”).
- Editing and refinement (“I edited and refined my work independently” or “I critically evaluated AI-suggested rewrites and selectively implemented them”).
Then the template lays out the prompt — “AI might have helped you learn in this process, or it may have hindered it. Take some time to answer some of the questions below that speak to your experience using AI.” — and poses some questions (tied to my learning outcomes) to help students write a short reflection about their usage of this emerging technology. Among the questions I list: What tricky situations arose when you used AI? How did you chart a path through them? Did bouncing ideas off AI spark your creativity? Were there any new exciting directions it led you toward, or did you wind up preferring your own insights independent of using AI? Which of your skills got a real workout from using AI? How do you feel you’ve improved?
Giving students the opportunity to think critically and openly about their AI usage lays bare some uncomfortable truths for both students and teachers. It can lead both parties to question their assumptions and be surprised by what they find. Faculty members may discover that students actually learned something using AI; conversely, students might realize that their use of these tools meant they didn’t learn much of anything at all. At the very least, asking students to disclose how they used AI on an assignment means you, as their instructor, will spend less time reading tea leaves trying to discern whether they did.
But, you may be wondering, won’t some students just use ChatGPT to write this assessment, too? Sure. But in my experience, most undergraduates are eager for mechanisms to show how they used AI tools. They want to incorporate AI into their assignments yet make it clear they still used their own thoughts. As faculty members, our best bet is to teach ethical usage and set baseline expectations without adopting intrusive and often unreliable surveillance.
Pre-ChatGPT, several of us tested three other AI tools (Elicit, Fermat, and Wordtune) in the writing-and-rhetoric department’s courses at the University of Mississippi. We published our findings in a March 2024 article on “Generative AI in First-Year Writing.” For our study, we evaluated students’ written comments about how they had used those three tools in their class work. Among our findings:
- Students did, indeed, learn when they used AI tools in their writing process. The catch: That learning was limited to short interactions with AI in structured assignments and did not extend to uncritical adoption of the tools.
- Students identified the benefits afforded by the technology in exploring counterarguments, shaping research questions, restructuring sentences, and getting instant feedback. However, they were also aware of its limitations: For example, many students chose not to work with large chunks of AI-generated text because it did not sound like them, preferring their own writing instead.
- They didn’t just learn how to prompt a chatbot. By being asked to critically evaluate their use of these tools, and to balance the speed of the technology with a required pause for reflection, students had to reaffirm, in their own words, why they were in the classroom: to learn.
When you require students to disclose the role of AI as a routine part of an assignment, you also open the door for them to realize that the tool may not actually have helped them. In our culture, we’ve become so accustomed to viewing failure as a bad thing that young learners avoid taking risks. But requiring open disclosure sends the message that it’s OK for them to try something new, and not succeed at it.
Mind you, it has only been 22 months since the public release of ChatGPT. We’re still grappling with the implications of generative tools and what they mean for students. We often learn the most about ourselves through failure. Let’s give students that same opportunity with AI.
What’s the alternative? If professors don’t advocate for such open disclosure in our new generative era, we risk offloading the task to a new wave of AI-detection tools that surveil a student’s entire writing process. Grammarly’s new Authorship tool lets students track their own writing process, capturing every move they make in a Google Doc. Flint uses linguistic fingerprinting and stylometry to compare student writing against a baseline sample. Google will begin watermarking generated text with SynthID. All of those methods purport to show whether AI was used. But none of them require students to think critically about what they learned when using the technology.
And using a tool to track your students’ writing only adds another layer of technology to attempt to solve a technology-created problem. You’re relying on a machine to try to validate whether a human wrote something. Personally, I’m not keen to participate in surveillance capitalism.
That’s why I recommend that faculty members shift focus away from technology as a solution and invest in human capital — i.e., us. Find ways for your students to openly disclose their use of AI tools and to demonstrate what they’ve learned when using the technology.
Our students aren’t mere content creators, and asking them to reflect on their usage of AI can help guide them to become more self-aware learners and writers. This approach may be key to building the AI literacy your students will need in the years ahead, while also preserving the irreplaceable value of human-centered education.