The start of another fall semester approaches and wary eyes turn once again to course policies about the use of generative AI. For a lot of faculty members, the last two years have been marked by increasing frustration at the lack of clear guidance from their institutions about AI use in the classroom. Many colleges have opted against setting an official AI policy, leaving it to each instructor to decide how to integrate — or resist — these tools in their teaching.
From a student’s perspective, enrolling in four or five courses could mean encountering an equal number of different stances on AI use in coursework. Let’s pause for a moment, set aside the syllabus-policy jargon, and focus instead on a very simple question:
Should students — and faculty members and administrators, for that matter — be open about using generative AI in higher education?
Since ChatGPT was released, we’ve searched for a lodestar to help us deal with the impact of generative AI on teaching. I don’t think that’s going to come from a hodgepodge of institutional and personal policies that vary from one college to the next and even from one classroom to another. Many discussions on this topic flounder because we lack clear standards for AI use. Students, meanwhile, are eager to learn the standards so they can use the technology ethically.
We must start somewhere, and I think we should begin by (a) requiring people to openly disclose their use of these tools, and (b) providing them with a consistent means of showing it. In short, we should normalize disclosing work that has been produced with the aid of AI.
Calling for open disclosure and a standardized label doesn’t mean faculty members couldn’t still ban the use of AI tools in their classrooms. In my own classroom, there are plenty of areas in which I make clear to my students that using generative AI will be unhelpful to their learning and could cross into academic misconduct.
Rather, open disclosure becomes a bedrock principle, a point zero, for a student, teacher, or administrator who uses a generative AI tool.
It’s crucial to establish clear expectations now because this technology is moving beyond text-only language models. Very soon, tools like ChatGPT will have multimodal features that can mimic human speech and vision. That might seem like science fiction, but OpenAI’s demo of its new GPT-4o voice and vision features means it will soon be a reality in our classrooms.
The latest AI models mimic human interaction in ways that make plain text generation feel like an 8-bit video game. Generative tools like Hume AI’s Empathic Voice Interface can detect subtle emotional shifts in your voice and predict whether you are sad, happy, anxious, or even sarcastic. As scary as that sounds, it pales in comparison to HeyGen’s AI avatars, which let users upload digital replicas of their voices, mannerisms, and bodies.
Multimodal AI presents new challenges and opportunities that we haven’t begun to explore, and that’s more reason to normalize the expectation that all of us openly acknowledge when we use this technology in our work.
Most faculty members will soon have generative tools built into their college’s learning-management system, with little guidance about how to use them. Blackboard’s AI Design Assistant has been available in Ultra courses for the past year, and Canvas will soon roll out AI features of its own.
If we expect students to be open about when they use AI, then we should be open when we use it, too. Some professors already use AI tools in instructional design — for example, to draft the initial wording of a syllabus policy or the instructions for an assignment. Labeling such usage where students will see it is an opportunity to model the type of ethical behavior we expect from them. It also provides them with a framework that openly acknowledges how the technology was employed.
What, exactly, would such disclosure labels look like? Here are two examples a user could place at the beginning of a document or project:
- A template: “AI Usage Disclosure: This document was created with assistance from AI tools. The content has been reviewed and edited by a human. For more information on the extent and nature of AI usage, please contact the author.”
- Or with more specifics: “AI Usage Disclosure: This document [include title] was created with assistance from [specify the AI tool]. The content can be viewed here [add link] and has been reviewed and edited by [author’s full name]. For more information on the extent and nature of AI usage, please contact the author.”
Creating a label is simple. Getting everyone to agree to actually use it — to openly acknowledge that a paper or project was produced with an AI tool — will be far more challenging.
For starters, we must view the technology as more than a cheating tool. That’s a hard ask for many faculty members. Students use AI because it saves them time and offers the potential of a frictionless educational experience. Social media abounds with influencer profiles hawking generative tools aimed at students with promises to let AI study for them, listen during lectures, and even read for them.
Most students aren’t aware of what generative AI is beyond ChatGPT. And it is increasingly hard to have frank and honest discussions with them about this emerging technology if we frame the conversation solely in terms of academic misconduct. As faculty members, we want our students to examine generative AI with a more critical eye — to question the reliability, value, and efficacy of its outputs. But to do that, we have to move beyond searching their papers for evidence of AI misuse and instead look for evidence of learning with this technology. That happens only if we normalize the practice of AI disclosure.
Professional societies — such as the Modern Language Association and the American Psychological Association — have released guidance for scholars on how to properly cite the use of generative AI in faculty work. But I’m not advocating for treating the tool as a source.
Rather, I’m asking every higher-ed institution to consider normalizing AI disclosure as a means of curbing the uncritical adoption of AI and restoring the trust between professors and students. Unreliable AI detection has led to false accusations, with little recourse for the accused students to prove their words were indeed their own and not from an algorithm.
We cannot continue to guess if the words we read come from a student or a bot. Likewise, students should never have to guess if an assignment we hand out was generated in ChatGPT or written by us. It’s time we reclaim this trust through advocacy — not opaque surveillance. It’s time to make clear that everyone on the campus is expected to openly disclose when they’ve used generative AI in something they have written, designed, or created.
Teaching is all about trust, which is difficult to restore once it has been lost. Many faculty members, based on prior experience, will question whether they can trust their students to openly disclose their use of AI. And yet our students will have to place similar trust in us: that we will not punish them for disclosing their AI usage, even though many of them have been wrongly accused of misusing AI in the past.
Open disclosure is a reset, an opportunity to start over. It is a means for us to reclaim some agency amid the dizzying pace of AI deployments by creating a standard of conduct. If we penalize students for using generative AI openly, by grading them differently, questioning their intelligence, or treating them with other biases, we risk pushing their AI use into hiding. Instead, we should be encouraging them to show us what they learned from using it. Let’s embrace this opportunity to redefine trust, transparency, and learning in the age of AI.