Since the emergence of ChatGPT, one of the most frequent questions we hear from faculty members who request instructional support is, “What should I say about AI on my syllabus?” Most of the time, what they’re really asking is: “How do I police the use of AI in my classes?”
For good reasons, many educators worry that misuse of one kind of intelligence (“artificial”) will diminish another kind (“human”). But that’s a false binary. Intelligence resists neat categories. After all, even Howard Gardner’s beloved theory of multiple intelligences has never been clearly validated in empirical studies. People routinely think with the aid of tools: Hearing aids, prescription glasses, and heart monitors all extend the capabilities of the humans who need and benefit from them. Not only can technology extend intelligence; it can do so in ways that level the playing field and create a more equitable society.
At this point, the question isn’t so much whether AI will replace other kinds of intelligence, but rather, how it will augment our thinking. Many students have described how generative AI has helped them with tasks such as brainstorming or outlining. Policing something that is already so entangled in students’ lives is an exercise in futility.
As educators, we face an extraordinarily authentic learning moment. Rather than worry about drafting the perfect syllabus policy, faculty members would be better served by asking a different question: How can we prepare students to thrive in a so-called artificially intelligent world? The college classroom is the perfect place to craft a collective answer. In what follows, we offer some strategies to do what educators do best: create an environment for transformative learning.
Prompt students and AI bots to take turns extending one another’s “limits.” During classroom discussions or activities, ChatGPT can come in handy when students run out of steam. Classroom “conversations” with AI can inspire new ideas or directions of thought. Two examples from our own teaching:
- Torres: I ask students in my writing and capstone courses to develop essential questions that they explore throughout the semester. Formulating good questions requires familiarity with a topic, which many students do not yet have. So they use ChatGPT to summarize prior research on a topic, suggest possible lines of inquiry, and shape their research question. For example, a first-year student wanted to explore the relationship between music and emotion. ChatGPT provided multiple summaries based on disciplinary perspectives (e.g., psychology and neuroscience). It could not provide a summary from a visual-arts perspective, so the student landed on the question: “How do we experience music when it is consumed visually, such as through YouTube videos?”
- Nemeroff: I ask my upper-level students in business-information management to use ChatGPT or other large language models to understand abstract or tricky technical concepts, such as how to use a programming language like SQL. When my students attempt to compare data sets in SQL (say, information on customers and orders), they don’t always understand the process. The syntax in SQL is challenging for students to write, and it trips them up every time. If they can use AI to draft the syntax, they are better able to practice comparing data sets; a sketch of the kind of comparison they struggle with appears below.
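To make that concrete, here is a minimal, self-contained sketch of the kind of customer-and-order comparison described above. The table names, columns, and sample rows are invented for illustration, and SQLite (run from Python) stands in for whatever database a course might actually use:

```python
import sqlite3

# Build a throwaway in-memory database with two hypothetical tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (customer_id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (order_id INTEGER PRIMARY KEY,
                         customer_id INTEGER,
                         total REAL);
    INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO orders VALUES (10, 1, 25.00), (11, 1, 40.00), (12, 2, 15.50);
""")

# The JOIN below is the syntax that typically trips students up:
# matching rows across two tables on a shared key, then aggregating.
query = """
    SELECT c.name, COUNT(o.order_id) AS num_orders, SUM(o.total) AS spent
    FROM customers AS c
    LEFT JOIN orders AS o ON o.customer_id = c.customer_id
    GROUP BY c.name;
"""
for row in conn.execute(query):
    print(row)  # e.g., ('Ada', 2, 65.0)
```

The JOIN clause is exactly the sort of syntax students might ask an AI to draft for them, freeing their attention for the more interesting question of whether the comparison itself makes sense.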
On a 2023 episode of the Dead Ideas in Teaching and Learning podcast, Cynthia Alby, a professor of teacher education at Georgia College and State University, said AI can operate as training wheels during tricky developmental steps in a student’s learning. For instance, you could invite students to use ChatGPT to draft an outline for a literature review before assigning the actual literature review.
Integrating AI into difficult activities can help students break through key misconceptions more productively. Likewise, that approach can help faculty members break through conceptual challenges in course design and preparation, such as writing student-friendly learning objectives. In both cases, a human is in the loop, co-creating the content with the AI.
Encourage students to question how ChatGPT and other AI tools know anything. Ask students to do research on what is known and unknown about how these tools are built, and how that influences the tools’ ability to help them develop a research topic or extend an argument. Discovering the opacity surrounding these closely guarded trade secrets leads to a healthy skepticism of AI’s perceived magic.
Students (and, frankly, all of us) need to develop an emerging skill set known as AI literacy to gauge when these tools are helpful. Students can begin recognizing the limits of AI’s usefulness or trustworthiness, such as when they need more recent information than the latest version of ChatGPT provides, when the AI produces an inaccurate response or source, or when meaningful personal reflection is required. AI literacy offers a framework for developing that critical awareness.
Require AI and students to take turns “fact-checking” one another. Most concerns around AI focus on writing. One innovative strategy involves asking large language models to help students with reading instead. For example, after annotating an entire text, students could feed sections of the same text into ChatGPT and compare its responses with their own work.
Let’s say students read an excerpt of Michel Foucault’s Discipline and Punish: The Birth of the Prison on panopticism. After discussing the excerpt, they could ask ChatGPT to provide historical context (e.g., what happened during the Plague that inspired some of Foucault’s thinking). With that added background, they could revise their annotations or produce a summary that integrates ChatGPT’s notes with their own. Or students could copy and paste confusing portions of the excerpt into ChatGPT and ask for a translation suitable for a general audience, helping them refine their annotations.
Additionally, students could ask ChatGPT to produce a reference list of other scholarly views on panopticism. Or students might work with the campus library to compile a reference list, and then use AI to filter those references by relevance. Say a student wants to do research on surveillance in social media; ChatGPT, positioned as a research assistant, can save time by surfacing the references on the social-media angle, allowing the student to focus on making the connection with panopticism.
Encourage students and AI to experiment and follow up. “Prompt engineering,” that is, “designing inputs for AI tools,” is emerging as a career path in its own right. These AI whisperers and safety guardians are key to the development of effective and safe models, and many of their design skills are helpful to any AI user. To that end, ask your students to:
- Explore some publicly shared AI prompts and the unconventional, unexpected, and odd responses they elicit, such as when GPT-4 started getting “lazy” in November 2023, leading to OpenAI’s acknowledgment and clarification on X. Such public interactions can not only remind students that AI is fallible but also reinforce that human knowledge remains valuable.
- Practice giving the tool more, and then less, information at the start of a prompt. Students will see how a sharper, more substantive prompt makes it possible for AI to give compelling responses. One of us (Nemeroff) found it transformative to realize that, while the initial prompt is important, the subsequent replies and refinements allow you to get more out of the model (a minimal sketch of this kind of iterative prompting appears after this list).
- Learn about how AI models are “red teamed” and tested for safety before their public release to prevent them from propagating offensive content, disinformation, and political campaigning. Students can do research on innovative AI policy ideas, like a proposal by the start-up company Anthropic to create “a constitution for an AI system” based on values found in places like the Universal Declaration of Human Rights. Ask students to suggest ethical ways to engage in productive inquiry with an AI model.
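To make that follow-up practice concrete, here is a minimal sketch of iterative prompting using the official OpenAI Python client. The model name, prompts, and topic (echoing the music-and-emotion example earlier) are placeholders rather than a prescribed workflow, and any chat-capable model would do:

```python
from openai import OpenAI  # official openai package, v1 or later

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Turn 1: a deliberately thin prompt, to see what a vague request yields.
messages = [{"role": "user",
             "content": "Summarize research on music and emotion."}]
first = client.chat.completions.create(model="gpt-4o", messages=messages)
print(first.choices[0].message.content)

# Turn 2: feed the reply back and add what the first prompt lacked:
# audience, scope, and the angle the student actually cares about.
messages.append({"role": "assistant",
                 "content": first.choices[0].message.content})
messages.append({"role": "user",
                 "content": ("Narrow that to how people experience music "
                             "visually, such as through YouTube videos, and "
                             "suggest three possible research questions.")})
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```

Comparing the two replies side by side makes the payoff of refinement visible: the second answer is only as good as the context the student thought to supply.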
In general, try to avoid assignments that seek a stand-alone answer in favor of an extended critical dialogue. We need to move away from such one-step interactions and toward a robust set of procedures and strategies to make AI work for us in smarter, more effective ways. In other words, the wisdom of scaffolding once again becomes a paramount instructional strategy.
Ask students and AI to formulate problems, not answers. It’s a very human reaction to seek a quick solution to a problem we barely understand. With AI, however, we might have an opportunity to slow down our problem-solving impulses for meaningful critical thinking.
AI can help students define and delineate the historical and social contexts of particular problems. For example, students wanting to investigate climate change can learn via ChatGPT about the industrial revolution, the green revolution, and current environmental laws far more quickly than if they had to navigate pages and pages of Google results. When they run into an issue, students can prompt the AI tool to ask them questions that might help them refine and elaborate on the topic.
In a post on LinkedIn, Ethan Mollick, author of Co-Intelligence: Living and Working With AI and an associate professor of management at the University of Pennsylvania, described an assignment in which he asks students to use AI to “simulate three famous people from history to criticize your business idea,” come up with 10 weak points, and articulate a vision of success.
Such uses of AI hold incredible opportunities as well as real limitations, and the whole point is getting students to recognize both in their own practice. While AI can efficiently provide relevant facts, only students can bring lived experience to the work of formulating problems.
As much as faculty members might want to be the gateway into a controlled environment that upholds a particular kind of intelligence, we might be better off as cognitive curators, showcasing constructive ways to integrate emerging AI technology into teaching and learning. By acknowledging the strengths and weaknesses of these tools, educators can empower students to think critically, engage with primary sources, and reflect on their own social identities.
We have to embrace the evolving landscape of AI and educate students to use such tools discerningly. Rather than thinking about syllabus policies and policing our classrooms, we need to think about practicing intelligence plurality.