Advice

Make AI Part of the Assignment

Learning requires friction. Here’s how to get students to disclose and evaluate their own usage of tools like ChatGPT.

By Marc Watkins October 2, 2024
Harry Campbell for The Chronicle

Open your favorite social-media platform and you’ll see dozens of threads from faculty members in anguish over students using ChatGPT to cheat. Generative AI tools aren’t going away, and neither is the discourse that using them is academically dishonest. But beyond that issue is another worth considering: What can our students learn about writing — and their own writing process — through the open use of generative AI in the college classroom?

In Ted Chiang’s recent essay in The New Yorker, “Why AI Isn’t Going to Make Art,” he aptly describes students using AI to avoid learning and the dire effect that has on their skills development: “Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.”

Learning requires friction, resistance, and even failure. Some three decades ago, Robert A. Bjork, a psychologist at the University of California at Los Angeles, coined the term “desirable difficulty” to describe the benefits that students get from doing increasingly challenging tasks to enhance their learning. ChatGPT removes many of those desirable difficulties by offering the user a frictionless experience: Prompt AI with a question, and get an instant answer. The student’s brain offloads everything to an algorithm.

Given that reality, how can you as a faculty member respond? In my first column, “Why We Should Normalize Open Disclosure of AI Use,” I noted that students are eager for standards because they want to use the technology openly and ethically. So the first step in responding is to set those standards in your own courses, and “normalize” disclosure of AI usage.

Here I will focus on the second step: how to introduce a bit of intentional friction into your students’ use of AI and find ways for them to demonstrate their learning when using the technology. Educators including Leon Furze, Katie Conrad, and Jane Rosenzweig have all written about the need to keep friction as a feature of the college classroom and not let generative tools automate learning.

In my own courses and as director of an AI institute for instructors at my university, I’ve adopted and suggested this method: As part of the assignment, require students to critically evaluate how they used the technology and how it affected their writing process. That way, they aren’t just passively relying on AI-generated content but meaningfully assessing its role in their writing.

Usage of a tool like ChatGPT often obscures the most critical aspects of a student’s writing process, leaving the instructor uncertain about which skills were used. So I created a form — the AI-Assisted Learning Template — to guide students in evaluating their own AI use on a particular assignment.

On the template, I first ask students to “highlight how you used human and machine skills in your learning” in five potential categories, and offer them a range of options to characterize whether and how they used AI tools to do the work:

  • Idea generation and critical thinking (for example: “I generated all of my ideas independently” or “I collaborated with AI to refine and expand on initial concepts”).
  • Research and information (“I utilized AI-powered search tools to find relevant information” or “I used AI-summarized articles but drew my own conclusions”).
  • Planning and organization (“I organized and structured my assignment on my own” or “I started with an AI-generated outline and developed it with my own insights”).
  • Content development (“I wrote all content without AI assistance” or “I expanded on AI-generated paragraphs with my own knowledge and creativity”).
  • Editing and refinement (“I edited and refined my work independently” or “I critically evaluated AI-suggested rewrites and selectively implemented them”).

Then the template lays out the prompt — “AI might have helped you learn in this process, or it may have hindered it. Take some time to answer some of the questions below that speak to your experience using AI.” — and poses some questions (tied to my learning outcomes) to help students write a short reflection about their usage of this emerging technology. Among the questions I list: What tricky situations arose when you used AI? How did you chart a path through them? Did bouncing ideas off AI spark your creativity? Were there any new exciting directions it led you toward, or did you wind up preferring your own insights independent of using AI? Which of your skills got a real workout from using AI? How do you feel you’ve improved?

Giving students the opportunity to think critically and openly about their AI usage lays bare some uncomfortable truths for both students and teachers. It can lead both parties to question their assumptions and be surprised by what they find. Faculty members may discover that students actually learned something using AI; conversely, students might realize that their use of these tools meant they didn’t learn much of anything at all. At the very least, asking students to disclose how they used AI on an assignment means you, as their instructor, will spend less time staring into tea leaves trying to discern if they did.

But, you may be wondering, won’t some students just use ChatGPT to write this assessment, too? Sure. But in my experience, most undergraduates are eager for mechanisms to show how they used AI tools. They want to incorporate AI into their assignments yet make it clear they still used their own thoughts. As faculty members, our best bet is to teach ethical usage and set baseline expectations without adopting intrusive and often unreliable surveillance.

Pre-ChatGPT, several of us tested three other AI tools (Elicit, Fermat, and Wordtune) in the writing-and-rhetoric department’s courses at the University of Mississippi. We published our findings in a March 2024 article on “Generative AI in First-Year Writing.” For our study, we evaluated students’ written comments about how they had used those three tools in their class work. Among our findings:

  • Students did, indeed, learn when they used AI tools in their writing process. The catch: Their learning was limited to short interactions with AI in structured assignments — and not with uncritical adoption of the tools.
  • Students identified the benefits afforded by the technology in exploring counterarguments, shaping research questions, restructuring sentences, and getting instant feedback. However, they were also aware of its limitations: For example, many students chose not to work with large chunks of generated text because it did not sound like them, preferring their own writing instead.
  • They didn’t just learn how to prompt a chatbot. By being asked to critically evaluate their use of these tools, and balance the speed of the technology with this required pause for reflection, students had to reaffirm in their own words the point of why they were in the classroom — to learn.

When you require students to disclose the role of AI as a routine part of an assignment, you also open up the avenue for students to realize that the tool may not actually have helped them. In our culture, we’ve become so accustomed to viewing failure as a bad thing that young learners avoid taking risks. But requiring open disclosure sends the message that it’s OK for them to try something new, and not succeed at it.

Mind you, it has only been 22 months since the public release of ChatGPT. We’re still grappling with the implications of generative tools and what they mean for students. We often learn the most about ourselves through failure. Let’s give students that same opportunity with AI.

What’s the alternative? If professors don’t advocate for such open disclosure in our new generative era, we risk offloading the task to a new wave of AI-detection tools that surveil a student’s entire writing process. Grammarly’s new Authorship tool lets students track their own writing process, capturing every move they make in a Google Doc. Flint uses linguistic fingerprinting and stylometry to compare student writing against a baseline sample. Google will begin watermarking generated text with SynthID. All of those methods supposedly show that AI was used. But none of them require students to think critically about what they learned when using the technology.

And using a tool to track your students’ writing only adds another layer of technology to attempt to solve a technology-created problem. You’re relying on a machine to try to validate whether a human wrote something. Personally, I’m not keen to participate in surveillance capitalism.

That’s why I recommend that faculty members shift focus away from technology as a solution and invest in human capital — i.e., us. Find ways for your students to openly disclose their use of AI tools and to demonstrate what they’ve learned when using the technology.

Our students aren’t mere content creators, and asking them to reflect on their usage of AI can help guide them to become more self-aware learners and writers. This approach may be key to establishing the AI literacy that students will badly need in the years ahead, while also preserving the irreplaceable value of human-centered education.

We welcome your thoughts and questions about this article. Please email the editors or submit a letter for publication.
About the Author
Marc Watkins
Marc Watkins is assistant director of academic innovation at the University of Mississippi, where he directs the AI institute for teachers. He writes regularly about generative AI in education on his newsletter Rhetorica.