4 Steps to Help You Plan for ChatGPT in Your Classroom

Why you should understand how to teach with AI tools — even if you have no plans to actually use them.

By Flower Darby June 27, 2023
[Illustration: a robot hand writing, sitting on top of a complicated machine. Harry Campbell for The Chronicle]

The advent of ChatGPT and other generative AI tools is akin to the seismic shifts we experienced when the internet was born or when smartphones became part of our everyday lives. Those inventions initially bewildered and concerned us, but they ended up changing the way we do lots of things — everything? — at work, home, and all spaces in between. So it will be with the new AI.

That analogy comes from a recent speech by Vinton G. Cerf, vice president and chief internet evangelist at Google. His comment struck a chord, and I’ve been thinking about it ever since. It offers much-needed perspective at a time when a lot of faculty members are once again bewildered and concerned about an invention that promises to radically alter the way we teach and work.

As an expert in technology-enabled teaching, I’ve spent the past few months absorbing and synthesizing higher ed’s conversations about generative AI. And yes, I predict we will adjust to ChatGPT as we did with the internet and smartphones, undergoing a process of wrapping our heads around AI tools and learning how to integrate them productively into our professional and personal lives.

Getting to that same level of comfort with AI may well be a rocky road. Certainly in recent months we’ve seen plenty of alarming headlines — like this one about the professor who failed all of his students after concluding they used ChatGPT, and this one and this one, about the degree to which students are already cheating with AI. But judging from the comments I see and hear, many faculty members are still in denial or unaware of how these tools might affect their own classrooms.

What follows are four strategies to help you progress through this wrapping-our-heads-around-AI stage. I would encourage every faculty member to learn to teach with ChatGPT and other such tools as soon as this fall. But my advice here is not just for those of you willing to do so — it’s also for those of you who aren’t. Because even if you’d rather not bring AI into your courses, you still need to understand how these tools work and be able to speak knowledgeably about them with students.

Get familiar with generative AI tools. The only way to do that is to use them. Play around. See how they work. Yet I’ve had countless conversations with faculty members who say they haven’t tried these tools yet and seem intent on keeping their heads stuck deep in the sand. I’ve also talked with many students who say their professors didn’t acknowledge the giant elephant-bot in the room this past spring, simply not mentioning ChatGPT or AI at all. I get it: I’ve had moments where I’ve felt stymied by these tools, and tempted to just ignore them. But that attitude won’t serve you or your students very well.

If you don’t know where to start, read “What You Need to Know About ChatGPT.” Recent essays (here and here) on how to adjust your writing assignments to the realities of AI are useful, as is this Google doc on “classroom policies for AI generative tools” and this nuanced piece on how AI could be a case of the “rich getting richer” if we don’t help students learn to use it. Finally, this article on what to do while chatbots “mature,” written by educator and blogger Ray Schroeder, establishes a useful middle ground between panic and denial. As an initial step forward, Schroeder encourages academics to develop a degree of fluency with these tools by testing them out informally.

It’s hard to make careful decisions about how and whether to use something if you have no firsthand knowledge of it. If that’s you, consider starting to use chatbots in your day-to-day life. I started by reminding myself, anytime I was about to Google something, to ask ChatGPT (or more accurately, Google Bard, my current favorite). As you play around, you can start thinking through how you might work with AI in your teaching.

Get ready to talk about it in class. If you’re like many faculty members, you have yet to define your course policies on AI, which is not surprising given how abruptly it burst on the scene. Plenty of “skeptics and fans alike” are struggling to frame their own views on the appropriate, ethical, and responsible use of ChatGPT and other tools, and aren’t fully prepared to talk with students about this topic. But ready or not, you’re going to have to discuss AI with your students in 2023-24.

I’ve come to believe that faculty members have an ethical obligation to help students prepare for the future of work, a future in which AI will undoubtedly feature prominently. Just last week I spoke with a professor whose daughter has to fire three people on her team because AI can do their jobs better. We must help students prepare for an AI-informed workplace. Even if you’re not sure what to think about using these tools in your classes, tell students that. Be honest. Students can learn from your example of transparency, humility, and willingness to learn.

A big part of the conversation has to focus on cheating and plagiarism. A recent talk on academic integrity by Tricia Bertram Gallant, director of the academic-integrity office at the University of California at San Diego, helped me think about how to frame this discussion with students. Cheating isn’t new, and neither is “contract cheating” (paper mills and other schemes to pay someone to do your homework), though the latter seems to be growing by leaps and bounds. One way forward is to emphasize how cheaters are only cheating themselves. In her talk, Gallant described a track coach who would tell runners they could ride a scooter around the track, but that that wouldn’t make them faster or stronger runners. Think about yourself like that coach, she said. Talk with students about the value of doing the work of learning for themselves instead of outsourcing it to a machine.

Better yet, coach students on the effective use of AI tools related to classwork. I recently had a conversation with a psychology professor who tells his students: “Use these tools to help you understand challenging passages in assigned readings, or to build preliminary foundational knowledge to help you understand more difficult concepts. Don’t use AI to cheat — use it as a tool to help you learn.” That strikes me as a good tone to take for now.

Further, we have an opportunity to help students become upstanding professionals who demonstrate integrity in their work. In this era of remote and hybrid jobs, working in ways that establish trust with your supervisor and team is more important than ever. Why not use a little class time to discuss integrity as students prepare for their future jobs?

If you suspect students of AI-related cheating, don’t rush to hand out F’s. Among the most common questions that faculty members are asking about AI: What do I do if I suspect a student cheated with ChatGPT? What if they admit it? Or, what if they don’t admit what seems to be a clear case of AI-enabled dishonesty?

If you think someone has submitted work done by a chatbot, Gallant and other cheating experts recommend you do two things first: (1) Carefully analyze their work, and (2) talk with the student about their writing process. Granted, this approach could be challenging and time-consuming in courses with large enrollments (which is why time-pressed instructors are inclined to hand out a failing grade on the assignment and be done with it). But I would still recommend talking with the student(s) you suspect. Request a short Zoom or phone call to ask a few questions about the student’s work: “How did you come up with the idea for your paper?” or “Tell me more about this argument you’ve proposed here.” Gauge whether they fully understand that using generative AI tools to write their paper was unethical.

If a student admits wrongdoing, you have options to consider:

  • Report the incident to your institution’s academic-integrity office. Just keep in mind: This solution might involve paperwork and a long administrative process. (Now would be a great time for institutions to streamline these processes in light of AI).
  • Ask the student to resubmit the assignment and show their work. What I mean by “showing their work” is adding comments in a document, explaining their writing process and sources.
  • Ask the student what consequence seems fair to them, and create the next steps together. The idea here: Discuss, don’t accuse. At least not as your first step.

If students do not admit to wrongdoing, and you’re pretty sure they cheated, well, that’s a little harder. This past spring an instructor told me about a graduate student whose writing on an assignment was noticeably unlike their previous prose, with strange errors. When confronted, the student denied using AI to generate the work. As it happens, it was a low-stakes assignment and the instructor decided there was no need to press further. But raising the issue at least opened up a teachable moment. Even if they “get away with it” this time, your intervention may keep them from cheating in your class again. And if the behavior persists, you may have to pursue a formal solution.

Ideally in the months ahead, higher-ed institutions and government agencies will create policies and guidelines on how to deal with cases of chatbot cheating. For now, in these very early days of AI, you’ll just have to follow your instincts. In my view, your best bet is to talk with the student(s) in question and decide how to proceed on a case-by-case basis. And maybe the difficulties of dealing with AI-related cheating will prompt some institutions to rethink those large class sizes.

If you use plagiarism-detection tools, do so with a hefty degree of caution. I don’t recommend policing your classroom to promote academic integrity, as those efforts can be traumatizing for students and can communicate that they don’t belong in your class, thereby widening equity gaps in higher ed.

However, I’m a realist. Plagiarism-detection tools are available (although not nearly as effective as they initially claimed to be) and plenty of academics will use them (some of these tools are now marketing themselves as a solution to AI-enabled cheating). Emily Isaacs, a professor of writing at Montclair State University and executive director of its faculty-advancement office, recently wrote in a 700-member email group on AI in Education: “These detection systems are being used and will continue to be used. We need to think about how they can be used as a tool and make the process open and clear for students.”

I take the same view of such detection tools as I do of online proctoring. We know that these proctoring surveillance systems have “a history of racial bias” and that they disadvantage any students who live with differences related to neurodivergence or to physical or learning disabilities. Likewise, some students have caregiving and work obligations that prevent successful completion of exams while being monitored via webcam. However, we also know that online proctoring might be unavoidable due to accreditation or other requirements.

My recommendation is that you think carefully about the use of AI-detection software, and not simply default to it. Consider other options before automatically concluding that policing students is the only way forward. But if you do decide to use detection software, analyze the results very carefully before accusing students of dishonesty. Better yet, make the results available to students so they can see what’s being flagged and revise accordingly before they submit their final work.

We are in uncharted territory. It’s hard to know how to proceed with teaching in an AI world. But we are smart and resourceful, and we want the best for our students and their learning. We will find our way. Give it your time — and attention.

We welcome your thoughts and questions about this article. Please email the editors or submit a letter for publication.
About the Author
Flower Darby
Flower Darby is an associate director of the Teaching for Learning Center at the University of Missouri at Columbia and co-author of The Norton Guide to Equity-Minded Teaching, published in March 2023. Find her here on LinkedIn.