
Why We Should Normalize Open Disclosure of AI Use

It’s time we reclaim faculty-student trust through clear advocacy — not opaque surveillance.

By Marc Watkins August 14, 2024
Illustration by The Chronicle

The start of another fall semester approaches and wary eyes once again turn to course policies about the use of generative AI. For a lot of faculty members, the last two years have been marked by increasing frustration at the lack of clear guidance from their institutions about AI use in the classroom. Many colleges have opted against setting an official AI policy, leaving it to each instructor to decide how to integrate — or resist — these tools in their teaching.

From a student’s perspective, enrolling in four or five courses could mean encountering an equal number of different stances on AI use in coursework. Let’s pause for a moment and take the issue out of the realm of syllabus-policy jargon and focus instead on a very simple question:

Should students — and faculty members and administrators, for that matter — be open about using generative AI in higher education?

Since ChatGPT was released, we’ve searched for a lodestar to help us deal with the impact of generative AI on teaching. I don’t think that’s going to come from a hodgepodge of institutional and personal policies that vary from one college to the next and even from one classroom to another. Many discussions on this topic flounder because we lack clear standards for AI use. Students, meanwhile, are eager to learn the standards so they can use the technology ethically.

We must start somewhere, and I think we should begin by (a) requiring people to openly disclose their use of these tools, and (b) providing them with a consistent means of showing it. In short, we should normalize disclosing work that has been produced with the aid of AI.

Calling for open disclosure and a standardized label doesn’t mean faculty members couldn’t still ban the use of AI tools in their classrooms. In my own classroom, there are plenty of areas in which I make clear to my students that using generative AI will be unhelpful to their learning and could cross into academic misconduct.

Rather, open disclosure becomes a bedrock principle, a point zero, for a student, teacher, or administrator who uses a generative AI tool.

It’s crucial to establish clear expectations now because this technology is moving beyond text. Tools like ChatGPT are gaining multimodal features that can mimic human speech and vision. That might seem like science fiction, but OpenAI’s demo of its GPT-4o voice and vision features means it will soon be a reality in our classrooms.

The latest AI models mimic human interaction in ways that make text generation feel like an 8-bit video game. Generative tools like Hume.ai’s Empathetic Voice Interface can detect subtle emotional shifts in your voice and predict if you are sad, happy, anxious, or even sarcastic. As scary as that sounds, it pales in comparison to HeyGen’s AI avatars that let users upload digital replicas of their voices, mannerisms, and bodies.


Multimodal AI presents new challenges and opportunities that we haven’t begun to explore, and that’s more reason to normalize the expectation that all of us openly acknowledge when we use this technology in our work.

Most faculty members will soon have generative tools built into their colleges’ learning-management systems, with little guidance about how to use them. Blackboard’s AI Design Assistant has been on the market for the past year in Ultra courses, and Canvas will soon roll out AI features.

If we expect students to be open about when they use AI, then we should be open when we use it, too. Some professors already use AI tools in instructional design — for example, to draft the initial wording of a syllabus policy or the instructions for an assignment. Labeling such usage where students will see it is an opportunity to model the type of ethical behavior we expect from them. It also provides them with a framework that openly acknowledges how the technology was employed.

What, exactly, would such disclosure labels look like? Here are two examples a user could place at the beginning of a document or project:

  • A template: “AI Usage Disclosure: This document was created with assistance from AI tools. The content has been reviewed and edited by a human. For more information on the extent and nature of AI usage, please contact the author.”
  • Or with more specifics: “AI Usage Disclosure: This document [include title] was created with assistance from [specify the AI tool]. The content can be viewed here [add link] and has been reviewed and edited by [author’s full name]. For more information on the extent and nature of AI usage, please contact the author.”

Creating a label is simple. Getting everyone to agree to actually use it — to openly acknowledge that a paper or project was produced with an AI tool — will be far more challenging.


For starters, we must view the technology as more than a cheating tool. That’s a hard ask for many faculty members. Students use AI because it saves them time and offers the potential of a frictionless educational experience. Social media abounds with influencer profiles hawking generative tools aimed at students with promises to let AI study for them, listen during lectures, and even read for them.

Most students aren’t aware of what generative AI is beyond ChatGPT. And it is increasingly hard to have frank and honest discussions with them about this emerging technology if we frame the conversation solely in terms of academic misconduct. As faculty members, we want our students to examine generative AI with a more critical eye — to question the reliability, value, and efficacy of its outputs. But to do that, we have to move beyond searching their papers for evidence of AI misuse and instead look for evidence of learning with this technology. That happens only if we normalize the practice of AI disclosure.

Professional societies — such as the Modern Language Association and the American Psychological Association, among others — have released guidance for scholars about how to properly cite the use of generative AI in faculty work. But I’m not advocating for treating the tool as a source.

Rather, I’m asking every higher-ed institution to consider normalizing AI disclosure as a means of curbing the uncritical adoption of AI and restoring the trust between professors and students. Unreliable AI detection has led to false accusations, with little recourse for the accused students to prove their words were indeed their own and not from an algorithm.


We cannot continue to guess if the words we read come from a student or a bot. Likewise, students should never have to guess if an assignment we hand out was generated in ChatGPT or written by us. It’s time we reclaim this trust through advocacy — not opaque surveillance. It’s time to make clear that everyone on the campus is expected to openly disclose when they’ve used generative AI in something they have written, designed, or created.

Teaching is all about trust, which is difficult to restore once it has been lost. Based on prior experience, many faculty members will be skeptical that students will openly disclose their use of AI. And yet our students will have to put similar trust in us: trust that we will not punish them for disclosing their AI usage, even though many of them have been wrongly accused of misusing AI in the past.

Open disclosure is a reset, an opportunity to start over. It is a means for us to reclaim some agency amid the dizzying pace of AI deployments by creating a standard of conduct. If we penalize students who openly use generative AI by grading them more harshly, questioning their intelligence, or otherwise treating them with bias, we risk driving their AI use underground. Instead, we should be asking them to show us what they learned from using it. Let’s embrace this opportunity to redefine trust, transparency, and learning in the age of AI.

About the Author
Marc Watkins
Marc Watkins is assistant director of academic innovation at the University of Mississippi, where he directs the AI institute for teachers. He writes regularly about generative AI in education on his newsletter Rhetorica.