The resistance began immediately. After I wrote an essay last summer on preparing to teach with AI tools, the very first comment I received was from an instructional designer casting doubt. Many faculty members, she said, had valid ethical concerns about AI and had no plans to use ChatGPT in their courses any time soon.
We’re well into the fall semester, and I am still seeing faculty members divide into three main camps over the ethics of bringing these problematic tools into our teaching:
- There are, of course, plenty of AI enthusiasts. They are embracing the technology and designing assignments to help students understand how to use it.
- Another group (and I include myself here) is what I would call the AI realists: We see legitimate ethical concerns but — given that ChatGPT is here to stay — we favor figuring out how to use it and how to equip students (and ourselves) for a rapidly changing workplace.
- Finally, there is a significant pool of AI resisters. I’m hearing from many of them when I give talks on this issue. In early fall, for example, I gave a virtual presentation to a group of community colleges. Two strong naysayers insisted that it was unethical of me even to encourage faculty members to bring this “biased” tool into our courses.
No doubt that mix of reactions is as much here to stay as AI. I understand the resistance. But I also share the growing sentiment in college-teaching circles that “if you’re not using AI, you’re falling behind.” We do our students a disservice — and we do not advance equitable outcomes in education or society at large — if we refuse to incorporate ChatGPT and other AI tools in the college classroom.
I got to thinking more about the ethical and equity aspects of teaching with AI (or refusing to) after I read a blog series on the topic by Leon Furze, a writer and educational consultant. Some of the faculty reluctance is political: Instructors worry about the human-labor and environmental costs of AI, and argue that the new tools reflect and reinforce existing online biases. There is a practical cost, too: Many instructors — especially those in contingent positions — lack the time to become teaching-with-AI experts, adding another layer of inequity to an already imbalanced system of faculty haves and have-nots.
Other concerns center on privacy. When professors ask students to use a particular AI tool for class, students often have to create a login. Do they realize how much personal data they may have to surrender to use tools that sit outside the campus IT systems? That data may later be used for unsavory purposes, wrote Justin Reich, director of the Teaching Systems Lab at the Massachusetts Institute of Technology, in Failure to Disrupt: Why Technology Alone Can’t Transform Education.
At the same time, recent college graduates are anxious about how AI will affect them as job seekers, according to a July survey. Yet an analysis of the results suggests that many faculty members are not stepping up to meet that need for guidance: “About 54 percent of students said their instructors didn’t openly discuss the use of AI tools, and 60 percent of students said their instructors or schools didn’t specify how to use AI tools ethically or responsibly.”
A more recent study found that nearly half (49 percent) of college students are using generative AI tools, compared with just 22 percent of faculty members. That disconnect reflects faculty hesitation as much as outright resistance. The study also documents a continuing lack of institutional guidance or policies on AI use in teaching and learning, and underscores the need to address ethical and equity considerations in such policies.
You may never be an AI enthusiast, but banning these tools from your courses will never work. Our students are resourceful and, in using ChatGPT on assignments, are simply doing what humans have always done: taking advantage of available tools to reduce their workload, especially when the work is perceived to be difficult, time-consuming, unimaginative, and unrewarding.
What I advocate here is choosing the middle ground. If we aim to prepare today’s students to tackle tomorrow’s problems, we would do well to teach them how to think about and use AI tools to enhance their work. Neglecting to do so disadvantages students and may exacerbate existing inequities, many of which fall along racial and socioeconomic lines. We can and should teach them (and ourselves) to streamline mundane tasks in order to free up more time and cognitive resources for the work that chatbots can’t do: genuine creativity and higher-order thinking, both of which are still unique to human intellect.
To that end, we should bring AI into our syllabi, class activities, and assignments. Here are five ideas to get you started.
Be explicit about the use of AI tools in your class. Invite students to help shape your course policy with comments and suggestions. As educational developer Maha Bali argues in a video on AI literacy, the key to an effective policy is transparency, regarding both (a) how students can, and can’t, use AI for classwork, and (b) how to disclose that use in an assignment.
Add your AI policy to the syllabus. Better yet, collaborate with colleagues to draft departmental and collegewide policies. Create a public draft of the proposed policy and invite students to comment on it, ask questions, and suggest changes to the wording.
Teach students how to use AI tools appropriately. In her video, Bali, who teaches digital literacies at the American University in Cairo, says she shows students how they might benefit from AI use in her class. For example, she encourages them to use AI to generate ideas, refine their first drafts, or even start an assignment with a ChatGPT-created draft and then make it their own. Writing instructors, and others for whom writing itself is a learning outcome, may not want to take that approach, but Bali’s example is instructive: It reminds faculty members to analyze carefully whether using AI will help or hinder students’ achievement of course outcomes on a given assignment.
Another example: Show students how to use generative AI to create visuals or logos for course projects. This strategy could be applied if you assign students to create websites, marketing campaigns, social-media posts, or promotional materials.
For more ideas on this front:
- Daniel Stanford, an instructional-tech consultant and blogger, provides a helpful (and not too overwhelming) list of ways to incorporate AI in teaching.
- Ethan Mollick, an associate professor of management at the University of Pennsylvania, wrote “Seven Ways of Using AI in Class” (plus a whole lot more on AI in various blog posts).
- For those with time and energy to spare, peruse the open-access book 101 Creative Ideas to Use AI in Education.
Demonstrate in class how scholars in your discipline might use AI. In a recent episode of his Assess Without the Stress podcast, Caleb Curfman, a history instructor at Northland Community and Technical College, in Minnesota, told me how he modeled the effective use of AI in his history course in the spring of 2023. Initially, he said, he was anxious about how these tools would affect his courses. Then he landed on an in-class activity: He and his students asked ChatGPT to design the perfect government, then suggested additional questions and prompts to refine the output. The approach helped students learn how to use an AI tool without requiring each of them to create a login.
Analyze AI results in class. Ask student teams to generate text, images, or code, and then evaluate the chatbot’s output. Like Curfman’s classwide exercise, this strategy avoids requiring every student to create a login, which some may not want to do. And assigning such tasks in groups makes it likely that at least one member of each team already has access to one or more AI tools.
Still not convinced you should teach with AI? Then teach about AI. Assign activities aimed at helping students become more critical consumers of AI. Autumm Caines, an instructional designer at the University of Michigan at Dearborn, has offered several examples of how to encourage students to weigh the ethical questions surrounding AI, such as discussing climate and labor concerns related to its development and use, or conducting a “technoethical audit” of tech that students might be asked to use in other classes.
Well-meaning faculty members have argued that it’s better not to require the use of AI in coursework, given the many and varied ethical concerns about the privacy of student data and unequal access to technology. But realistically, most students are probably already using chatbots on their phones and other devices, perhaps to cut corners on tasks they perceive as tedious busywork or otherwise meaningless. Why leave the rest of them in the dark?
We can better prepare all students for the future if we teach them the responsible use of AI in class.