It’s been a few years of head-spinning disruptions for the folks who work in faculty development on campus. First they became the front-line workers for our abrupt transition to online learning in the spring of 2020. Once higher education resumed in-person teaching, they were on the front lines again — this time, helping faculty members come to grips with a student population whose mental-health needs and neurodiversity had been exacerbated or exposed by the pandemic.
Then, just as online pedagogy and inclusive teaching moved center stage, ChatGPT and other AI tools emerged to suck all the oxygen out of the room.
Webinars, conferences, essays, social-media posts, and new books about AI are creeping like kudzu vines into every corner of the higher-education landscape, blocking out our views of everything else. Administrators are seeking visionary thinkers to capitalize on the power of this new technology, faculty members want help creating policies and revising traditional assignments, and faculty-development specialists have the task of helping both parties — not to mention students — adapt.
This faculty developer has been (publicly) skeptical about the intense embrace between higher education and generative AI — skeptical, but not dismissive. I work with people who have encouraged me to keep my eyes and mind open about these tools. One of my colleagues has jumped feet-first into the educational uses of AI with energy and creativity, offering webinars and workshops for our faculty members, teaching a course on the subject, and experimenting with how academics can use it to become more efficient in their teaching-preparation work.
While I intend to continue learning from him and others, and maintain my curiosity about the role that generative AI will play in our futures, I am more and more convinced that — on this front at least — higher education needs to re-embrace one of its core virtues: slow-walking.
That term can have a pejorative connotation, referring to a practice of delaying something without being honest about your intentions. But slowness has been lauded for its virtues in many quarters — in enjoying your food or resisting the “culture of speed” in the professorial life. The slow-walking I recommend here doesn’t mean resisting, but creating more space for reflection and discussion as we enter this new era of human history.
Of course, higher education has long been accused of being a slow-walking animal. Its deep traditions have often not served the needs of changing students, or made enough of an effort to deal with contemporary social or political concerns. Nothing demonstrates our stiff gait more than the persistence of teaching practices that run counter to the research on how humans learn. Mounds of evidence demonstrate that students learn best when they engage in active learning in the classroom, and yet we cram hundreds of them into auditoriums to copy down the words of a distant lecturer at the front of the room.
Slow-walking our embrace of AI thus doesn’t come without perils and might not apply in every context. Without question, some fields should move quickly to prepare students for careers in which they will use this technology, and can help guide other fields in doing so. Many employers already expect those skills. Generative AI can also offer special benefits for some students or faculty members, bringing economy or efficiency to tasks that have been especially time-consuming or difficult.
But it gives me pause when I hear exclamations of how these new tools will make higher education more economical or efficient. Most of our work centers on human learning. Economy and efficiency do not necessarily produce better learning; in fact, they sometimes work actively against it. For that matter, do economy and efficiency always lead to greater human flourishing? Or to positive developments in human history?
This summer I will return to the classroom after a three-year hiatus. As I have been reflecting on the role that AI will play in my future courses, I have been working to develop a set of four principles to guide my thinking. I share them here not because they are groundbreaking or universally applicable — no doubt they reflect both my humanities background and my faculty-development role — but because I hope that they might inspire you to pause and reflect on your own guiding principles. The first two are principles I have consistently maintained throughout my three decades of teaching; I would make the case for them when confronted with anything new in higher education. The remaining two have not always been core values for me but have risen to the forefront of my thinking over the past year.
Principle No. 1: Variety. People learn differently. That doesn’t mean you should try to design teaching practices to suit every purported learning style (e.g., auditory, visual, or kinesthetic), a notion that education researchers have thoroughly debunked. But of course, human brains and life experiences are singular. Most brains share a basic architecture, but that structure gets remodeled throughout our learning lives. A teaching technique that captures the attention or unlocks the understanding of my seatmate might pass me by entirely.
You would drive yourself to distraction by searching for a teaching strategy that will reach every student; no such strategy exists. The best you can do is offer multiple pathways to engage with the material, and hope that everyone in the room will discover the one that makes the difference for them. Further, it sometimes benefits me as a learner to struggle along a pathway that my seatmate finds to be a smooth track. Some assignments should feel comfortable to me; others should challenge me.
When it comes to artificial intelligence, then, I would argue that we should consider it as just one of many pathways — not the one that will necessarily replace the rest of them. For example, writing teachers (like me) ask students to organize their thoughts into essays. ChatGPT can do that work for students, but does that mean I should stop teaching students how to write well-organized essays in a composition class, or in any kind of class? Of course not. Organizing one’s thoughts into a form that can be understood by other humans is a skill that helps us in many areas of life.
Instead of either abandoning my commitment to writing skills or hoping that my students will never make use of ChatGPT, I plan to embrace the principle of variety. After assigning a new essay, and giving students time to do research on their topic, I’m thinking about creating two classroom experiences that walk them through different processes for creating an outline:
- For the first assignment, I might ask students to spend 10 minutes drafting an outline in class. They will share their ideas with a small group, receive feedback from me, and then revise the outline outside of class.
- For the second assignment, I will invite students to paste a rough thesis into ChatGPT and ask it to produce an outline. Students will evaluate the result in a short writing exercise or discussion in class, and then revise the AI’s outline as they see fit before composing their essay.
Principle No. 2: Transparency. I can imagine the question that students might have after those two assignments, even if they don’t articulate it to me: Why did we bother with the first exercise when ChatGPT could have done that work for us the whole time?
Answering such a question will have to become deeply intertwined with our teaching practices in the future. In this case, I would explain to students that the skill of organizing one’s thoughts will have broad applications in their lives. No single experience of practicing a skill will fix it permanently in a learner’s brain. But practiced in multiple domains — in English, philosophy, economics, and biology courses — it will eventually become part of their cognitive skill set. That sort of explanation could be delivered in person or via an assignment sheet that aims to clearly spell out classroom rules that went “unwritten” in the past.
I understand the questions that people (especially students) are asking about whether traditional assignments or activities should remain part of our teaching practices in the era of AI. But we need to be open and upfront with students about the skills they will need to be successful in their work and lives — things like writing, speaking, organizing their thoughts, solving difficult problems, and understanding other viewpoints. We should not so lightly abandon techniques that have been successful in the past. But we can become more transparent about them.
Principle No. 3: Sequencing. AI advocates like to compare technology slow-walkers like me to the people who resisted the calculator, another tool that we now take for granted. Sure, I use my smartphone calculator all the time, but I only have that capacity because I understand how numbers work. I had lots of practice counting, adding, and subtracting things on my own, which laid the foundation for my ability to multiply and divide large sums with my calculator or use formulae in a spreadsheet.
The same is true of AI. Students come to us with varied preparation. Some need continued practice at college to master certain writing, thinking, and computational skills, while others don’t need such reinforcement. We can use AI tools to move students beyond the basics and create new pathways for their learning. But we should be careful about jumping directly into AI with every skill.
That’s where the principle of sequencing is vital. How we use ChatGPT in a senior seminar might not work in a first-year course, or in the third week of the semester versus the 15th week. In the two-part outline assignment, it matters that students draft the outline first on their own in class, without the aid of technology. After having done the work with their own brains (and the brains of other humans), they will be better prepared to evaluate and critique an outline produced by ChatGPT. In other cases, the reverse sequence might make more sense.
What matters is that we apply intentionality to the order in which we introduce or allow generative tools in our courses.
Principle No. 4: Reflection. Most skills benefit from a combination of practice and reflection: We try something, we think about what happened, and then we try again. The space between those two attempts gives our brains time to process and improve. It turns out that even AI benefits from slowing down its “thinking” process, as one group of researchers recently discovered. The speed that we prize for efficiency and economy can work against the best possible results.
Those of us who teach writing will often have students write multiple drafts of an essay, with plenty of processing time between drafts. Even when we ask students to work with AI, we can mimic that process by walking them through a classroom exercise. For instance, we could have students give ChatGPT an initial prompt, and then ask them to pause and reflect on the result through writing or discussion: What happened here? Why did it produce that result? Guiding students via a series of prompts and re-prompts might bring that reflective quality to our work with AI — and result in better creations from both humans and bots.
Slowness in teaching and learning has many benefits and can take many forms. But we live in a world that values speed, and plenty of administrators and students want results delivered as quickly and effortlessly as possible. Those of us who teach should not hesitate to remind them of the virtues of slowing down.