What became immediately clear, after sitting in on the first few presentations, is that people are hungry for practical information on how to understand AI and its relationship to learning. That could mean seeking guidance on how to create a course chatbot or jumping into a nuanced discussion of academic integrity and assessment.
And while much, if not most, of academe remains deeply skeptical of generative AI’s effects on learning, many attendees wanted to prepare for how it will intersect with their teaching. Some of the more noteworthy sessions I sat in on covered these topics:
Creating an AI course tutor: This was a popular session topic, as the idea of building a chatbot from your course material carries particular appeal for some instructors. In one session, Art Brownlow, a professor of music and deputy provost’s fellow for academic innovation at the University of Texas-Rio Grande Valley, showed the audience how, with carefully designed instructions, he could upload an open-source textbook to ChatGPT Plus and create a tutor bot for a philosophy course that students could access through the free version of ChatGPT.
Brownlow also demonstrated how he created another chatbot based on his syllabus, so that students could immediately get answers to common questions — such as when an assignment was due or what the course late policy was — without having to flip through pages of documents. Faculty members clearly liked the idea of giving their students extra support, with guardrails, through AI.
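Brownlow built both bots through ChatGPT's interface rather than code, but the underlying pattern (carefully designed instructions paired with uploaded course material) can be sketched for instructors who prefer to script it. The snippet below is a hypothetical illustration only; the instruction text, function name, and model name are my assumptions, not Brownlow's actual configuration.

```python
# Hypothetical sketch of the "tutor bot" pattern: guardrailed
# instructions plus course material, assembled in the message
# format used by most chat-completion APIs.

TUTOR_INSTRUCTIONS = (
    "You are a tutor for an introductory philosophy course. "
    "Answer only from the course material provided below. "
    "Guide students with hints and questions; never complete "
    "graded work for them. If a question falls outside the "
    "material, say so and refer the student to the instructor."
)

def build_tutor_messages(course_material: str, student_question: str) -> list[dict]:
    """Assemble one tutoring turn: instructions, course material,
    and the student's question, in system/user roles."""
    return [
        {"role": "system", "content": TUTOR_INSTRUCTIONS},
        {"role": "system", "content": f"Course material:\n{course_material}"},
        {"role": "user", "content": student_question},
    ]

# With the OpenAI Python library (requires an API key), the turn
# could then be sent as, for example:
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(
#       model="gpt-4o",
#       messages=build_tutor_messages(chapter_text, question),
#   )
```

The guardrails live entirely in the instruction text, which is why presenters stressed that the instructions must be written carefully: the bot is only as constrained as its prompt.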
Better feedback with AI: Another series of sessions focused on using AI to provide feedback to students as they work through an assignment. Amanda M. Main, associate chair and lecturer in the department of management at the University of Central Florida, described how she uses Claude — which she considers the most human-like of the generative-AI agents — to support students as they prepare for an in-class exercise on conflict negotiation.
Main, who teaches multiple sections of the course and says it would be impossible to give students in-depth feedback on her own, anonymizes students’ drafts and uploads them to Claude. The AI, which has been given detailed instructions and her assignment rubric, produces an analysis that tells students where their preparation might be weak and offers other points to consider. She quickly scans Claude’s comments to make sure they’re on target, then sends them on to the students.
Main said she found that students come better prepared to the live role-playing they do a couple of days later. She grades the students on their in-class work, she notes, not on the formative feedback given by Claude. Students’ role-playing has been stronger, and their course satisfaction has increased because of Claude, she said.
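Main anonymizes drafts by hand, but for large sections a first pass could be scripted. The sketch below is a hypothetical illustration, not her actual process: it redacts a known roster of names from a draft before the draft is sent to an AI for feedback.

```python
import re

def anonymize_draft(draft: str, roster: list[str]) -> str:
    """Replace each enrolled student's name with a neutral placeholder
    before the draft is uploaded for AI feedback. Longer names are
    redacted first so that 'Jordan Lee' is not half-replaced by 'Lee'."""
    redacted = draft
    for i, name in enumerate(sorted(roster, key=len, reverse=True), start=1):
        redacted = re.sub(re.escape(name), f"[Student {i}]",
                          redacted, flags=re.IGNORECASE)
    return redacted

draft = "Jordan Lee opened negotiations, and Sam Ortiz countered."
print(anonymize_draft(draft, ["Jordan Lee", "Sam Ortiz"]))
# → [Student 1] opened negotiations, and [Student 2] countered.
```

Automated redaction is only a first pass: an instructor would still want to spot-check for nicknames and identifying details that a roster match would miss.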
Keeping AI out of assignments: Perhaps the most popular session I attended — standing room only — focused on AI resistance. Ashley Evans, a computer-science professor and chair of the software-development and cloud-computing program at Valencia College, provided a detailed presentation on how she creates assignments that are hard or impossible to do well with AI, something that is of particular concern for her asynchronous online courses. Some of her strategies include grading centered on specificity, rather than general knowledge, and multilayered assignments that involve interviews, analysis, position papers, and a video defending that position, rather than a single research paper.
New forms of assessment: Evans’s presentation was one of many that dug into how to design assignments and assessments that take AI into account. As several presenters noted, not only can AI complete traditional assessments well, but students will also probably find courses that measure them on how well they memorize a set of facts, or how well they can produce a standard essay, to be pointless. Many want to understand AI better, learn how to use it ethically, and develop the skills that are uniquely human.
Kiera Allison, an assistant professor of commerce at the University of Virginia, argued that the best way forward is to ask students to collaborate with AI and then assess the quality of that collaboration. The key to making such an assignment work, she said, is to give students work that would be very difficult to do on their own or with AI alone. In the assignment she used to illustrate this idea, she asked students to choose a persuasive task that seems unusually difficult or impossible, then work with AI to complete the task.
The assignment is scaffolded with multiple formative and summative assessments, and students must collaborate with AI in different ways (role-playing, debating, etc.). Students also have to describe in detail how they worked with AI throughout the project.
If this all sounds overwhelming, it certainly can be. Professors are being asked to AI-proof their assignments, discern how and what students are learning in ways that ensure academic integrity, and prepare students for a world in which AI will likely be used in their future careers. That’s a lot.
After the conference, I spoke to one of the main organizers, Kevin Yee, special assistant to the provost for artificial intelligence and director of the Faculty Center for Teaching and Learning at the University of Central Florida. Yee said the conference was intentionally designed around a “crowdsourcing” approach to AI. “This is really, in my opinion, the wickedest and thorniest problem to come to higher education since I’ve been alive,” he said of AI. “And the answers are not apparent.”
There are good reasons for educators to be dubious about AI, Yee noted. It offers affordances, for example, that can look like shortcuts, which allow students to offload their thinking to a tool. “But I don’t think the correct reaction to it is to ban it or to ignore it or whatever. I think we have to find some nuanced path that threads the needle here,” he said. If students can be convinced to use AI as a thought partner rather than an assignment doer, he said, then hesitant faculty might come on board. But, he admits, “it’s clouded by things like, they don’t understand it themselves.”
I’ll be digging more deeply into some of these strategies, along with professors’ concerns, in the coming weeks. If anything I’ve described sounds similar to something you’ve been working on in your courses, or if you have different ideas to share, drop me a line at beth.mcmurtrie@chronicle.com. Your story may appear in a future newsletter.
AI and reading
Last May, I reported that a startling number of students lack critical reading skills or the ability to read at length without getting distracted or mentally fatigued. My story struck a chord: It became one of The Chronicle’s most-read pieces of the year. Twelve months later, I checked back with some of the people I’d spoken with to find out what has changed. I encourage you to read the story, “The Reading Struggle Meets AI,” but here are the highlights.
Reading remains a problem. Most professors told us that students’ reading challenges have not gone away. They continue to struggle with long readings. They often get lost in the text. They don’t see the point of having to read long or complicated material. “Many of our students, they are saying that reading is a time waste that makes things harder rather than more understandable,” wrote Chris Hakala, a psychology professor at Springfield College.
AI has made things worse. ChatGPT and other AI bots are pretty good at summarizing textbook chapters, articles, even books. If you’re a student who struggles with reading in the first place, professors say, these tools give you the false feeling that you understand the assigned material. “[N]ot only is that not really true,” wrote Andrew Tobolowsky, a professor of religious studies at the College of William & Mary, “when I have them respond to passages of various kinds in the classroom that I didn’t assign at home, many struggle, or seem to struggle, to arrive at any real takeaways, even so far as what literally happens over the course of the text.”
There are some bright spots. Some professors who teach small, discussion-based humanities classes believe the pandemic hangover is finally lifting. Students are more interested in diving into books — or at least they know that they should be reading. Still, students’ minimalist approach to reading means professors are assigning far less material than before.
Professors may be part of the problem. Liz Norell, associate director of instructional support at the University of Mississippi’s Center for Excellence in Teaching and Learning, dug into the reading dilemma on her own campus. Students’ No. 1 complaint is that professors never mention assigned reading in class. The second-most-common reason they don’t complete reading assignments: They don’t know what they’re supposed to be reading for. Norell can sympathize. “I’ve had that experience in multiple classes,” said Norell, who likes to take college classes for fun. “If I’m having that trouble, I can’t even imagine how undergrads are feeling, where maybe the opacity of academic writing only makes that harder.”
The multimedia future is here. Some professors question whether everyone’s reading habits have changed. “There are old articles, old journal articles, that I’ve always used, and I look at them from 2025 eyes, and I think, Oh, I don’t want to read that,” said Susan Blum, an anthropology professor at the University of Notre Dame. Blum has switched to shorter, more colloquial works that may include multiple elements: text, images, videos, interviews. She still aims for each work to be “accurate and responsible and scholarly in its underpinnings but more approachable.”
I’m always interested in hearing how people are tackling the reading challenge. Have you found ways to get your students to dig into books? Have you moved toward multimedia? Write to me at beth.mcmurtrie@chronicle.com. I’d like to hear your story.
Thanks for reading Teaching. If you have suggestions or ideas, please feel free to email us at beth.mcmurtrie@chronicle.com or beckie.supiano@chronicle.com.
Learn more at our Teaching newsletter archive page.