People have nightmares of AI-powered killing machines turning our world into a robotic dystopia. But a less-bloody yet equally striking transformation is already underway — one in which machine-generated summaries can turn any book or article into CliffsNotes on demand, at scales we can barely comprehend. Students using ChatGPT to produce an essay suddenly seems like a mere nuisance compared with the dire prospect of them turning to AI to do all of the course reading for them.
Reading is a skill so foundational to education, to democracy, and to our notion of modern culture that we judge (and are judged on) how literate our population is compared with other nations. Yet in the near future, we’ll be able to offload close-reading skills to private corporations in exchange for instant summaries tailored to our reading level, and very likely, to our political interests.
As faculty members, we will be left with few options to curb AI-assisted reading. There is no detector that can alert you if a student is using an AI reading assistant, nor can I fathom a scenario in which a faculty member could ban this new tech from the classroom.
Existing interfaces are already being updated to include AI features that change the reading level of a text. Google Scholar just released AI outlines to help users skim texts. That follows OpenAI’s latest update for its Plus users, GPT-4o with canvas, which lets a user change a text’s reading level from kindergarten all the way to graduate school. These text levelers are but one flavor of AI reading assistants. Google’s NotebookLM automatically summarizes texts you upload to it. A single notebook can contain 50 sources — up to four million words. Microsoft’s Reading Coach allows K-5 students to interact with AI to help them with reading fluency.
Once again, we’re faced with a slew of AI-powered interfaces that make us pause and consider their impact. These tools have appeared on the heels of think pieces lamenting the lack of reading skills among college students, including The Chronicle’s “Is This the End of Reading?” and, most recently, The Atlantic’s “The Elite College Students Who Can’t Read Books.”
The thing is: You don’t even have to read The Atlantic essay. The magazine moved some time ago to pair every digital article with its own bespoke synthetic narration. That’s right, instead of reading about how students at elite colleges struggle to read books, you can listen to an AI-generated voice read it to you.
Generative technologies are radically changing how we interact with information in online spaces. Earlier this year, I wrote a Substack post arguing that “No One Is Talking About AI’s Impact on Reading.” I went on social media to see how various AI apps were being marketed to students and found influencers selling students third-party AI apps that completely offloaded the process of reading to an algorithm.
Even if you print out all of your course readings or only assign ones locked in textbooks, the reality is that a simple scan from a mobile phone is all your students need to digitize printed text and load it into their AI interface of choice.
Sound overdramatic? It’s not. Take a look at Magic School AI, one of many tools being marketed to K-12 educators. Within it are dozens of preloaded AI features, including its own Text Leveler Tool. The developers decided to load the first chapter of F. Scott Fitzgerald’s The Great Gatsby as the exemplar activity to show teachers how easy it is for the AI to rewrite a text from an eighth-grade reading level to any level they’d like. You can give it a try yourself here.
AI reading assistants didn’t suddenly materialize out of thin air. Early on, I started doing research on generative AI’s ability to summarize texts and augment reading practices. In the spring of 2023, I piloted two AI reading assistants in my first-year writing courses, thinking here was a tool that could finally help students with hidden disabilities to process complex readings. For that population of students, AI reading assistants have been immensely helpful. However, for the rest of the students in the class, AI reading assistants were yet another way they could offload the labor of learning to a machine.
And that’s the most frustrating aspect of generative AI in education: This is technology that can truly help some students yet can also, through misuse or overreliance, seriously weaken the skills of many others.
Asking 18-year-old undergraduates to responsibly use something as powerful as AI is no simple feat. In my own classroom, I was eventually able to convey the problems posed by letting an AI tool do your reading for you by asking my students how they would feel if, rather than read their essay assignments myself, I used technology to do it.
“You’d be fired,” replied a student.
I laughed. I had to explain to my students that there are no policies against using AI reading assistants, largely because few people know they exist, let alone how to deal with them. We spent the rest of the class trying to brainstorm rules or frameworks. What my students came up with is thoughtful, critical, and simple: Only use an AI tool to help you read once you hit a pain point that would cause you to stop reading. I think that’s an excellent starting point, and it is one that I advocate for when I talk with students about AI’s impact on reading.
Instead of trying to ban these new tech tools from our classroom or deny their value, as faculty members, we have to be honest about the affordances that AI reading assistance can bring to students, including those who are neurodiverse and second-language learners. At the same time, we must be wary of the long-term consequences of too many students letting a machine do their reading for them.
It’s our job to start being more proactive in this evolving AI era. Can we find ways to protect and promote the value of students learning close reading skills in the traditional way? Can we also adapt to the realities of 2024 and incorporate new AI technology into our teaching to help students learn?
I think so. But it will require faculty members to be more intentional when we assign readings.
I don’t have all, or even most, of the answers, as AI reading assistance is so new, but here’s one idea: Use class time to practice some good old-fashioned annotation. Those of us who teach online courses can use social annotation tools, like Perusall, Hypothes.is, or the commenting feature on Google Docs. No, this approach won’t “AI proof” your assigned reading, but it will add some friction into the process that can help students focus on engaging with a text and encourage them to use their own brains. Derek Bruff, associate director of the University of Virginia’s teaching center, has curated a wonderful collection of resources on “Annotation in Teaching and Learning.” It’s an excellent example of how to use existing tools and thinking to accommodate new practices in our teaching.
An instructor who assigns students to skim a set of readings for key ideas might allow them to use AI reading assistants on that task. Skimming texts for main ideas and relevant points is a powerful skill, but I don’t see it surviving when you can upload 50 sources into a single notebook in Google NotebookLM and have the AI summarize and even synthesize information. When you think about processing information at that scale, it’s pretty easy to understand how powerful generative AI can be for researchers and how tempting for students.
Yet you’d be foolish if you didn’t pause to ask what is gained by allowing a machine to summarize material for you or your students. Reading is an act of synthesizing information, and reading isn’t something that happens when all you engage with is neatly generated AI summaries. I don’t know how I would have developed as a writer if, instead of reading works of literature, I had relied on AI to give me flattened stubs or bullet points. I also don’t know how my reading and synthesis skills would ever have improved if the only material I read had been neatly packaged at a reading level that didn’t challenge me.
Educators snapped to attention when ChatGPT came on the scene, alarmed by how disruptive it would be to our writing assignments. The mounting use of AI for reading assistance should spark fresh concerns among faculty members about the technology’s impact on reading skills. We need to start talking about our options.