Advice

Why We Should Normalize Open Disclosure of AI Use

It’s time we reclaim faculty-student trust through clear advocacy — not opaque surveillance.

By Marc Watkins
August 14, 2024

The start of another fall semester approaches, and wary eyes turn once again to course policies about the use of generative AI. For a lot of faculty members, the last two years have been marked by increasing frustration at the lack of clear guidance from their institutions about AI use in the classroom. Many colleges have opted against setting an official AI policy, leaving it to each instructor to decide how to integrate — or resist — these tools in their teaching.

From a student’s perspective, enrolling in four or five courses could mean encountering an equal number of different stances on AI use in coursework. Let’s pause for a moment and take the issue out of the realm of syllabus-policy jargon and focus instead on a very simple question:

Should students — and faculty members and administrators, for that matter — be open about using generative AI in higher education?

Since ChatGPT was released, we’ve searched for a lodestar to help us deal with the impact of generative AI on teaching. I don’t think that’s going to come from a hodgepodge of institutional and personal policies that vary from one college to the next and even from one classroom to another. Many discussions on this topic flounder because we lack clear standards for AI use. Students, meanwhile, are eager to learn the standards so they can use the technology ethically.

We must start somewhere, and I think we should begin by (a) requiring people to openly disclose their use of these tools, and (b) providing them with a consistent means of showing it. In short, we should normalize disclosing work that has been produced with the aid of AI.

Calling for open disclosure and a standardized label doesn’t mean faculty members couldn’t still ban the use of AI tools in their classrooms. In my own classroom, there are plenty of areas in which I make clear to my students that using generative AI will be unhelpful to their learning and could cross into academic misconduct.

Rather, open disclosure becomes a bedrock principle, a point zero, for a student, teacher, or administrator who uses a generative AI tool.

It’s crucial to establish clear expectations now because this technology is moving beyond models of language. Very soon, tools like ChatGPT will have multimodal features that can mimic human speech and vision. That might seem like science fiction, but OpenAI’s demo of its new GPT-4o voice and vision features means it will soon be a reality in our classrooms.

The latest AI models mimic human interaction in ways that make text generation feel like an 8-bit video game. Generative tools like Hume AI's Empathic Voice Interface can detect subtle emotional shifts in your voice and predict whether you are sad, happy, anxious, or even sarcastic. As scary as that sounds, it pales in comparison to HeyGen's AI avatars, which let users upload digital replicas of their voices, mannerisms, and bodies.

Multimodal AI presents new challenges and opportunities that we haven’t begun to explore, and that’s more reason to normalize the expectation that all of us openly acknowledge when we use this technology in our work.

Most faculty members will soon have generative tools built into their college's learning-management system, with little guidance about how to use them. Blackboard's AI Design Assistant has been on the market for the past year in Ultra courses, and Canvas will soon roll out AI features.

If we expect students to be open about when they use AI, then we should be open when we use it, too. Some professors already use AI tools in instructional design — for example, to draft the initial wording of a syllabus policy or the instructions for an assignment. Labeling such usage where students will see it is an opportunity to model the type of ethical behavior we expect from them. It also provides them with a framework that openly acknowledges how the technology was employed.

What, exactly, would such disclosure labels look like? Here are two examples a user could place at the beginning of a document or project:

  • A template: “AI Usage Disclosure: This document was created with assistance from AI tools. The content has been reviewed and edited by a human. For more information on the extent and nature of AI usage, please contact the author.”
  • Or with more specifics: “AI Usage Disclosure: This document [include title] was created with assistance from [specify the AI tool]. The content can be viewed here [add link] and has been reviewed and edited by [author’s full name]. For more information on the extent and nature of AI usage, please contact the author.”

Creating a label is simple. Getting everyone to agree to actually use it — to openly acknowledge that a paper or project was produced with an AI tool — will be far more challenging.

For starters, we must view the technology as more than a cheating tool. That’s a hard ask for many faculty members. Students use AI because it saves them time and offers the potential of a frictionless educational experience. Social media abounds with influencer profiles hawking generative tools aimed at students with promises to let AI study for them, listen during lectures, and even read for them.

Most students aren’t aware of what generative AI is beyond ChatGPT. And it is increasingly hard to have frank and honest discussions with them about this emerging technology if we frame the conversation solely in terms of academic misconduct. As faculty members, we want our students to examine generative AI with a more critical eye — to question the reliability, value, and efficacy of its outputs. But to do that, we have to move beyond searching their papers for evidence of AI misuse and instead look for evidence of learning with this technology. That happens only if we normalize the practice of AI disclosure.

Professional societies — such as the Modern Language Association and the American Psychological Association, among others — have released guidance for scholars about how to properly cite the use of generative AI in faculty work. But I’m not advocating for treating the tool as a source.

Rather, I’m asking every higher-ed institution to consider normalizing AI disclosure as a means of curbing the uncritical adoption of AI and restoring the trust between professors and students. Unreliable AI detection has led to false accusations, with little recourse for the accused students to prove their words were indeed their own and not from an algorithm.

We cannot continue to guess if the words we read come from a student or a bot. Likewise, students should never have to guess if an assignment we hand out was generated in ChatGPT or written by us. It’s time we reclaim this trust through advocacy — not opaque surveillance. It’s time to make clear that everyone on the campus is expected to openly disclose when they’ve used generative AI in something they have written, designed, or created.

Teaching is all about trust, which is difficult to restore once it has been lost. Based on prior experience, many faculty members will question whether they can trust their students to openly disclose their use of AI. And yet our students will have to put similar trust in us: that we will not punish them for disclosing their AI usage, even though many of them have been wrongly accused of misusing AI in the past.

Open disclosure is a reset, an opportunity to start over. It is a means for us to reclaim some agency amid the dizzying pace of AI deployments by creating a standard of conduct. If we ridicule students for using generative AI openly, by grading them differently, questioning their intelligence, or showing other biases, we risk driving their use of AI into hiding. Instead, we should be advocating that they show us what they learned from using it. Let's embrace this opportunity to redefine trust, transparency, and learning in the age of AI.

Tags
Teaching & Learning, Technology
About the Author
Marc Watkins
Marc Watkins is assistant director of academic innovation at the University of Mississippi, where he directs the AI institute for teachers. He writes regularly about generative AI in education on his newsletter Rhetorica.