I recently met with a professor of medicine who, although reluctant, was ready to test-drive generative AI for the first time. She gave ChatGPT a set of symptoms and asked for a diagnosis, treatment options, and medication dosages. Reading the results, her expression shifted from horror to anger. “This is terrible!” she fumed. “This is completely inaccurate and extremely dangerous information. I’m never using AI again.” As gently as I could, I replied, “But your students are — and will. They need to know not to trust it.”
Aversion continues to be a common response when I work with faculty members on adjusting their teaching to the realities of tools like ChatGPT. As another professor echoed, “I feel guilty when I use AI. If I don’t want my students using it, maybe I shouldn’t, either.”
I try to validate such responses. This technology does, indeed, raise important ethical and educational concerns that should not be easily dismissed. Yet as Marc Watkins recently wrote: “Does anyone really think we’re going to make it through this without doing the hard work of talking about AI with our students?” Neglecting to equip your students with AI literacy and skills does them a disservice. But it’s hard to talk meaningfully about something with which you have little direct experience. Which is why I encourage faculty skeptics to start using AI in their own work and in their classrooms.
Getting academics to teach with these new tools, however, is proving to be a tough nut to crack.
One of our superpowers as academics is the ability to critically analyze and push back on new ideas. It's in our nature to do so before we can move forward productively. But many of us are more resistant than usual when it comes to integrating AI in our teaching. And colleges are struggling to equip faculty members with the tools and training they need to teach effectively in an AI age.
Sure, I’ve seen some progress. Yet despite the best efforts of campus teaching centers and tech shops, many faculty members are floundering in confusion about AI. And nearly two years after these tools were unveiled, most institutions still lack formal policies on this front. In a spring 2024 survey, only 18 percent of professors said they understood how to teach with AI, only 14 percent were confident in their ability to do so, and a whopping 86 percent felt worried.
This landscape is evolving fast, but those survey results align with my experience working with faculty members at all types of institutions. I’ve observed a level of angst about AI that I’ve never seen in my almost 30 years of college teaching. Indeed, I’ve come to conclude that we’re dealing with an unprecedented existential crisis on the faculty.
AI is here to stay, and it will require changes to beloved traditions of college teaching. While some academics are embracing that fact and integrating AI into their courses in amazing and powerful ways, many others see it entirely as a negative for the profession and are experiencing the five stages of grief. It’s good to process those emotions. But while you’re doing so, why not take small steps toward our AI future?
As the lead author of Small Teaching Online, I’ve fully embraced James M. Lang’s ideas on the importance of small changes in teaching. Here, then, are five small steps to begin moving past denial, anger, and depression and toward some level of acceptance.
Use AI to prepare for class. Until now, I’ve always taught in-person courses of fewer than 25 students. But this fall, I’m teaching a new “Social Psychology” course in a large-enrollment, in-person format for the first time. As anyone who’s ever taught knows, the prep work for a new class in a new format is incredibly time-consuming. But I’m appreciating a silver lining: I’ve been using AI to streamline my course prep in ways I hadn’t before this semester.
There’s a lot you can do to better support students — either in a new course or in one you’ve taught numerous times before — when you strategically employ AI.
For example, ask an AI tool to create an icebreaker for class. Here’s a simple prompt I used this fall: “In your role as a professor of social psychology teaching an undergraduate class, please create an icebreaker that students can do in groups of four. It should be inclusive of neurodivergent students and students with differing abilities, and should introduce today’s topic: social cognition. The activity should take about 5 minutes.” After the chatbot complied, I made this follow-up request: “Recall that you teach a large-enrollment class. You have 62 groups of students. Create a unique two-sentence scenario for each group that builds on the icebreaker and is relevant to students ages 19 to 25. Provide three discussion and application questions for each scenario.”
In less than five minutes, I had the beginnings of an inclusive, interactive class session.
Don’t get me wrong. AI can’t replace faculty members. Nor am I suggesting that you outsource your pedagogical and disciplinary expertise to a bot. Students need your human experience, wisdom, and compassion as they undertake the transformational journey that is higher education. What I am saying: With so much on your plate, you can use AI in thoughtful ways to save time and effort.
Ideas on how to do that can be found among the 55 practical tips in the University of Central Florida’s free resource, AI Hacks for Educators. Another useful source is this companion site to a book published in April, Teaching with AI: A Practical Guide to a New Era of Human Learning.
Reduce learning bottlenecks with AI. While some students are increasingly savvy at using AI to help them understand challenging concepts, others hesitate, often out of fear of being accused of cheating. As the instructor, you know which parts of your course students struggle with most.
AI can generate quick solutions for those pain points. Ask ChatGPT to generate multiple examples or devise extra quiz questions about a difficult concept. Ask it to simplify the concept or to relate it to current events or pop culture.
Better yet, show your students how to do that for themselves.
Require students to read up on the subject. If you’re feeling ill-prepared to teach AI literacy, I have an easy solution: Assign them to read the new “Student Guide to Artificial Intelligence,” recently published by Elon University and the American Association of Colleges and Universities. Written specifically for college students, it is intended to help them understand how to use AI thoughtfully and appropriately.
I believe that an AI-infused curriculum is coming, and that eventually, accreditors will encourage programs to update and revise general-education courses to include foundational information on AI use. Since that hasn’t yet happened on a widespread basis, incorporate this guide instead.
Ask them to create and critique a piece of bot-generated content. AI has limitations, biases, and flaws. It’s essential to understand what it does well and what we’re all better off doing ourselves. The challenge here is that, as novice learners, students can’t spot inaccuracies in AI output as easily as professors can.
Here’s an easy and fun way to help students learn a healthy distrust of AI: First, ask students to generate an image, such as a poster for an upcoming event. These tools generally don’t perform well (at least not yet) when it comes to integrating text with an AI-generated image. What may appear to students, at first glance, to be an eye-catching promotional poster is very likely, on closer inspection, to include some nonsense text.
Build on that strategy by assigning students to use AI to write an essay and then grade the result using your course rubric. Very often, students realize they can write better themselves than what ChatGPT has produced.
Make an “AI sandwich.” The idea of the AI sandwich is to “use AI tools for the beginning and end of an assignment, with the middle being grounded in human knowledge and expertise.” In an M.B.A. course, for example, students would ask ChatGPT to generate a plan to resolve a common workplace problem, then analyze the plan on their own to identify flaws or gaps, and, finally, provide that feedback to the bot and ask for an improved plan.
Or you could invert the strategy to emphasize human effort: Have students write a first draft without AI, get suggestions for improvement from a bot, and then revise and submit the finished product on their own. The simplicity and transferability of this suggestion means it could work well in any type of course.
Some enthusiasts have taken this idea to the next level, proposing assignments that they describe as an “AI layer cake” or even an “AI marble cake.” I admire the innovation, but for me, a three-step process is enough.
The key in all of these exercises is to get students to reflect meaningfully on their use of AI and how it helped or hindered their learning. They need to develop foundational knowledge and skills, even (or especially) in an AI age. Help them to see for themselves how these tools support or hamper their cognition. That is an important skill they need to carry with them into the workplace.
As you take these first small steps, it is crucial to continue wrestling with valid concerns about AI use in education and society more broadly. The world needs us to lead conversations, to hold industry to account, and to engage policymakers on these issues. We can’t do that if we’re not teaching and learning with AI.