Last fall one of Jacqueline Fajardo’s students came to her office, eager to tell her about an AI tool that was helping him learn general chemistry. Had she heard of Google NotebookLM? He had been using it for half a semester in her honors course. He confidently showed her how he could type in the learning outcomes she posted for each class and the tool would produce explanations and study guides. It even created a podcast based on an academic paper he had uploaded. He did not feel it was important to take detailed notes in class because the AI tool was able to summarize the key points of her lectures.
Fajardo was floored. She knew about generative AI but hadn’t heard of this particular program. The University of Delaware, where she is an assistant professor in the department of chemistry and biochemistry, didn’t provide access to it for security and privacy reasons. But her student had downloaded it using his personal email account.
As he described this great study aid, Fajardo’s teaching reflexes kicked in. She cautioned him that his approach to learning chemistry was quite passive. One of her assignments, for example, asked students to wrestle with some complex ideas and, through close reading and discussion with their classmates, prepare a presentation for a general audience. Now AI was doing that work for him.
This student was dedicated and came to office hours regularly. Later, after he did poorly on an exam, she implored him to study the material on his own, walking him through strategies such as creating concept maps to better understand the connection among ideas.
But Fajardo could not convince him that using AI as a first resort was inhibiting his ability to learn, even as his grades in general chemistry remained below average. Eventually, he was taken off the honors track. He still stops by her office, and Fajardo wonders if he connects his poor performance in her course to his use of AI. “He very authentically believed that he was doing a good thing for his studies,” she said.
Fajardo wrestles with her role in all of this as well. “I felt responsible in some way that I didn’t know about this tool,” she said, when her student had been relying on it in a way that seemed to backfire.
She does not want to be in that position again. The experience convinced her that everyone on campus — be they students, teaching assistants, or professors — needs to understand how AI works and how to use it. She thinks that if she had had the language to talk about AI, she might have been able to persuade the student to study differently.
“I need to work with AI professionals on campus to learn, even, what do I say?” she said.
Fortunately for her, Delaware is one of a growing number of colleges with plans to develop an AI-literate campus. Institutions are coalescing around the belief that, as AI reshapes the world, it will reshape what it means to be college-educated. Exactly how remains to be seen. But AI literacy, they believe, may be a way to break down barriers between those who see the technology as ripe with potential and those who see it as profoundly harmful.
Many instructors do, in fact, see generative AI as a threat to learning and continue to prohibit its use in class. According to recent surveys, most faculty members are concerned about unethical or excessive use of AI by students. Yet like Fajardo, most also want to understand it better and say the main barriers to doing so are a lack of time and too few resources from their colleges.
Meanwhile, these tools are often marketed directly to students and used outside the sight lines of their professors. An OpenAI analysis found that about half of students ages 18 to 24 use AI for starting papers or projects, summarizing long texts, brainstorming projects, exploring topics, and revising writing.
To a lesser degree they also use it to solve math problems, prepare for exams, write computer programs, create lesson plans, learn languages, ask advice, search for jobs, organize their schedules, and do career-related writing. AI has, in short, become an all-purpose tool.
Students are hardly outliers. AI is being deeply integrated into the jobs they will do, the media they consume, and the industries they will interact with, such as health care and banking.
So it’s perhaps not surprising that, in a recent survey of college leaders by the American Association of Colleges and Universities (AAC&U), 14 percent said their campus had adopted AI literacy as an institutional or general-education learning outcome. Among colleges with more than 10,000 students, the figure was 20 percent. But that’s still a stark minority of institutions, given how rapidly AI has advanced and proliferated.
“We’re currently teaching the last students who have a sense of before and after” generative AI, said Matthew Kinservik, an English professor and former vice provost for faculty affairs at the University of Delaware, who is helping lead the AI-literacy charge. “The ones who come on campus in three or four years, it’s just going to be the water they’ve swum in all their lives.”
Still, being immersed in AI is not the same as being literate in it.
The term AI literacy can feel squishy. But the definitions circulating among campus working groups, disciplinary associations, and other organizations share several key components. To be AI literate, they agree, you must understand how generative AI works, be able to use it effectively, know how to evaluate its output, and understand its weaknesses and dangers. For AI skeptics, that last point is crucial. Too many workshops stop short, they say, focusing only on how to use AI tools.
AI-literacy advocates argue that this framing encourages academics to see AI literacy as an extension of the communication, information-literacy, and critical-thinking skills that colleges are well equipped to teach.
Of course, it’s more complicated than that.
For one, the challenge of ratcheting up professors’ and students’ understanding of AI is immense. More than two years after ChatGPT stunned the world with its speed and fluidity, much of higher education remains in a reactive mode. A handful of institutions have invested heavily in AI tools and training, but many colleges cannot afford the expense. Plenty of faculty members have attended workshops and experimented with AI in their teaching, but campuswide strategies and departmental and disciplinary discussions have been limited. A survey conducted in November by Educause, a nonprofit focused on technology in higher education, found that only 22 percent of respondents said their college had an institutionwide approach toward AI.
A survey of instructors by Ithaka S+R, an education-focused nonprofit, last spring concluded that despite a proliferation of resources focused on how to use generative AI in teaching, faculty members were still struggling with how to integrate it into their courses.
“Only 18 percent of respondents agreed or strongly agreed that they understand teaching applications of generative AI,” the authors wrote, “and only 14 percent agreed or strongly agreed they feel confident in their ability to use generative AI in their instruction.”
As use of that all-purpose tool spreads among students, colleges trying to catch up and expand AI literacy are taking different approaches.
Delaware activated an AI working group in early 2023, pulling in people from across campus, including the provost’s office, the library, academic technology services, and the teaching center. Those AI leaders have met with department chairs and deans to discuss AI’s impact, and designed training programs for people across campus. Delaware also joined a project led by Ithaka S+R in which 18 universities are conducting research on how AI will affect teaching, learning, and research on their campuses.
Meg Grotti, associate university librarian for learning, engagement, and curriculum support, said that groundwork has been crucial as the university starts to dig into how to weave AI literacy into the curriculum. People need to understand the magnitude of what’s happening, she said, before they can be asked to change how they work.
When ChatGPT came out in November 2022, Grotti noted, people compared it to the calculator, as if it were a time-saving tool with limited application. That is a poor analogy for what’s actually taking place, she said. “This isn’t a calculator. This is the internet. It’s going to usher in huge cultural changes because of what it allows.”
To begin defining AI literacy in more concrete, discipline-specific terms, the university devoted a three-day winter institute, held in January, to asking groups of faculty members from different departments which AI-literacy concepts and materials could be relevant to their majors, and whether or how to add AI literacy to their curricula.
One participant, Madeline Hagerman, an assistant professor in the art-conservation department, had been using generative AI for a while, primarily in course design. Her first experience came last spring when she asked ChatGPT to create a rubric. She uploaded a description of an assignment, a list of what she wanted to assess, and how many points to assign to each assessment. She was astonished. “It generated in 30 seconds something that would have taken hours.” That experience, she said, changed her view of AI. “This is a tool I can use,” she thought, “to take care of those tedious tasks.”
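For readers who want to picture that workflow, here is a minimal sketch of the same kind of rubric request, expressed through OpenAI’s Python client rather than the ChatGPT interface Hagerman used. The model name, assignment text, criteria, and point values are hypothetical placeholders, not details from her course.

```python
# A minimal sketch of rubric generation, assuming the openai Python package
# and an OPENAI_API_KEY in the environment. All course details are invented.
from openai import OpenAI

client = OpenAI()

assignment = "Write a 1,500-word condition report on an assigned ceramic object."
criteria = {
    "Accuracy of material identification": 30,
    "Clarity and organization of the report": 30,
    "Use of discipline-appropriate terminology": 20,
    "Quality of cited sources": 20,
}

# Mirror the structure Hagerman describes: an assignment description,
# the qualities to assess, and the points assigned to each.
prompt = (
    "Create a grading rubric as a table with one row per criterion and "
    "columns describing excellent, adequate, and poor performance.\n\n"
    f"Assignment: {assignment}\n\nCriteria and point values:\n"
    + "\n".join(f"- {name} ({points} points)" for name, points in criteria.items())
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any recent chat model would do
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```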
Since then, she has turned to AI to design strategies to spark class discussion and make class activities more hands-on. She is the only member of her department so far to use AI, something she attributes to her age. “I am the youngest member of the faculty, so for me it comes more naturally to look at these alternate tools in teaching.”
She and two colleagues found the winter institute useful to begin thinking about AI literacy in the world of art conservation. They decided to add a sentence about emerging technologies to the program goals for the undergraduate major and plan to explore AI-literacy skills in more detail during a faculty retreat this summer.
Donna Woulfe, an associate professor of biological sciences, also attended the AI institute with several department colleagues. They discussed how AI is shaping teaching in their discipline, enabling certain kinds of research, and being used by the industries in which their students will work.
Woulfe and her group brought what they learned back to their colleagues, who decided that they should incorporate AI into how they define several of their programs’ educational goals for graduate and undergraduate degrees. One goal is understanding and applying the scientific process, including experimental design, data analysis, and interpretation. Because of the “tremendous increase” in industry use of these tools, Woulfe said, discussion of AI is a natural fit. Pharmacogenomics, for example, which looks at how a person’s DNA affects their reaction to drugs, uses machine learning, a form of artificial intelligence.
AI literacy applies to other program goals differently. For the goal of critical thinking and quantitative reasoning, students need to understand that they can’t abdicate critical thinking to AI. For the goal of written and oral skills for communicating scientific ideas, Woulfe said, they need to see that AI use could be helpful or harmful. Either way, it must be discussed.
“I think we’re going to see a variety of opinions on the uses of these tools,” she said. “But I will tell you for certain, students will come out being exposed to it somewhere in their biology curriculum.”
Some faculty members who attended the institute will experiment with test cases to help them and their colleagues think through what works.
Hagerman is designing an assignment around ChatGPT’s ability to produce written descriptions of photos of pieces of art, something that art conservators are trained to do. In her experiments, it did an excellent job of describing a Wedgwood vase and a poor job of describing an archaeological bronze brooch from Tell Jemmeh, a Bronze Age site. That, she said, reflects the field of conservation’s existing bias toward Western art, a bias her students need to be aware of as they enter the work force.
“I work on crumbly bits of metal or broken ceramics,” she notes of her specialty, which is object conservation. “I think ChatGPT has a long way to go before it can be a useful tool for my kind of conservation.”
Discipline-specific conversations are one way colleges are sharpening their thinking around AI literacy. Understanding how AI is being used by industry is another. That’s a particularly salient issue at the University of Baltimore, said Jessica Stansbury, director of teaching and learning excellence with the campus teaching center. About 60 percent of the university’s students are enrolled in graduate programs, and its four schools and colleges — in law, business, public affairs, and arts and science — focus heavily on applied and community-engaged learning.
During a three-day AI summit last June, the university invited representatives from local community organizations and businesses to join students and faculty members in a conversation about what it means to be AI literate. As a group, they decided that literacy combines practical knowledge with ethical awareness, and that colleges should play a role in preparing students and faculty to use AI tools to contribute meaningfully and responsibly to society.
Speaking with people from off campus helped professors look at AI more broadly, said Stansbury.
Knowing that businesses use generative AI to, for example, perform analyses helped instructors think about their role in developing students who understand these tools and how to use them responsibly. “That’s not something that can happen,” Stansbury said, “if faculty stick their heads in the sand.”
That need is likely to become only more urgent. AI is reshaping organizational structures and career paths within industries, as the technology takes over both routine tasks and those requiring technical expertise. According to one recent paper, “The Labor Market Effects of Generative Artificial Intelligence,” 30 percent of respondents surveyed said they had used generative AI in their jobs. Those tools are more commonly used, the authors found, by younger workers, people with more education, those with higher incomes, and people in industries like customer service, marketing, and information technology.
The university plans to hold another AI summit in June, during which industry leaders will be asked for specific details of how they use the technology. That will help faculty members think concretely about the particular facets of AI literacy that are relevant to their coursework.
“I think what happens is we hear AI literacy and everyone’s like, ‘Yes! We have to put this in the curriculum right away!’’’ Stansbury said. “That’s where the faculty resistance and that fear come in, because it’s like, ‘Well, for what? I don’t need to have this in my profession.’ But business might need to have it, or law might need to have it. So let us take a step back and reach out and figure out what it is for them, so we can build something that makes sense for us.”
Some colleges are identifying departmental ambassadors who can teach AI skills to their peers. The University of Virginia adopted this strategy, training 51 faculty members from across the institution to raise awareness and develop AI literacy among colleagues through events such as workshops, book groups, and consultations.
UVA is highly decentralized, so making AI literacy an institutionwide priority “was never going to happen,” said Michael Palmer, who heads the university’s Center for Teaching Excellence. Rather, it is working to create a space in which schools and departments that want to gain literacy can do so.
These faculty AI guides, as they’re called, are paid $5,000 each and “didn’t have to come in with a ton of experience,” said Palmer; they received training and ongoing support. Since last August, the guides have held more than 70 events and conducted more than 175 one-on-one consultations. They tend to hear things that people in the teaching center do not, he said, and then they can share that with other guides and the teaching-center staff.
UVA faculty members are fortunate that the administration has provided generous financial support, including an institutional license for Microsoft Copilot and money for course-development work, Palmer noted. “Sometimes we feel like we have to whisper that we have all these resources.” For its part, the university plans to “share everything and publish everything” it is working on, including discipline-specific teaching advice.
The splashiest strategy, of course, is to sign multimillion-dollar deals with AI companies, on the idea that providing access to an array of tools and training will encourage deeper engagement with AI. Arizona State University was the first to sign a major deal with OpenAI and has created the “AI Innovation Challenge” to encourage experimentation in teaching and research.
More than 600 proposals were submitted, and 3,000 faculty members — more than half of the total — have taken some AI training, said Anne Jones, ASU’s vice provost for undergraduate education. The training was designed by faculty members, librarians, and instructional designers, Jones said, noting that “we haven’t mandated anything,” and that while some professors who participate have no plans to use the technology, they want to understand how it works. ASU’s asynchronous training program allows faculty members to take what they need from it. A philosophy professor teaching about ethics, for example, could adapt something from the course to talk about AI ethics.
While not every institution will have the resources to create its own AI platform or conduct AI research, “every institution can have an authentic conversation around what has value for students in the classroom and how we might help instructors get started in that context,” Jones said. “Even if it’s just a prompt for a 15-minute activity in a class, you can design that. Start with the low-barrier things and then ask what’s working. Ask what people need. Engage in the conversation.”
Whether these investments, training, and discussions are changing many professors’ views about AI is hard to know. Mandates don’t go over well with academics, so uptake is voluntary. And faculty members tend to look skeptically at certain approaches. Faculty unions have raised alarms, for example, when administrators discuss the possibility of outsourcing some of their work to AI. And professors have pushed back against some multimillion-dollar deals.
When the California State University system announced a partnership in early 2025 with several tech companies, reportedly including a $16.9-million payment to OpenAI, members of the California Faculty Association denounced the investment, noting that the system is facing significant budget cuts. The union also criticized the lack of consultation with faculty, students, and staff. “The implicit message there is that as positions dry up, they want to lean more into AI to do the work,” said Pam Lach, a digital-humanities librarian at San Diego State University, who remains ambivalent about AI’s impact on learning. “The best thing we can do for our students is teach them how to discern reliable information from misinformation. It’s getting harder and harder. And with the proliferation of gen AI, that’s accelerating.”
The question remains whether a critical number of faculty members will be willing to learn, and teach, about AI, even if they dislike it. Some say they will continue to avoid it because of the environmental, educational, and ethical dangers it represents. Their opposition is broader than whether a particular tool can be used in a particular way to make course design easier or help a student learn effectively. They see their role, in part, as fighting against deep-pocketed companies trying to force untested technologies on the general public.
“Part of what those of us who are resisting are trying to do is to slow down that train … and to really push back hard on the narrative of inevitability,” said Josh Eyler, director of the Center for Excellence in Teaching and Learning at the University of Mississippi. “The landscape over all is rushing forward so fast and trying to embrace things like global AI literacy without thinking through the effects on cognition and on individual students.”
Eyler said he has yet to come across any substantive studies showing that AI can aid learning in ways superior to something that can be done without AI. While he agrees that teaching about AI is relevant in certain courses and disciplines, he doesn’t think everyone needs to be AI literate. His research on how people learn, he added, has probably made him more skeptical. “Learning is very hard work. It’s a deeply complex process,” he said. “If you offload onto AI the very cognitively demanding aspects of the learning process, then like a muscle atrophying, you’re weakening that process over time.”
He has debated AI-agnostic and AI-friendly colleagues on the topic, and ascribes their contrasting views to how they weigh costs and benefits differently. He knows people who work at teaching centers or as instructional designers who are supportive of AI use, so he does not think this is, in any way, a settled debate.
Others have been frustrated by the narrow ways in which many AI advocates talk about teaching, as if learning were simply a series of tasks to be accomplished and professors could use AI tools to streamline the process.
At the University of Florida, which has declared that it is “building an AI university,” Trysh Travis has been part of the conversation about AI literacy, initially when she was an associate dean in the College of Arts and Sciences looking at how AI might be used for research, and now back on the faculty as an associate professor of women’s studies, where she is exploring its role in teaching.
Travis said that some AI advocates on campus don’t fully understand what’s different about teaching humanities. The staff who are trainers and instructional designers, she said, are “all really smart, and they’re really good people. They are just coming from a different orientation to the project than someone who was trained as a philosopher, someone who trained as a historian.”
When she tells them, “‘I don’t need to make a multiple-choice quiz. I don’t need to create a chatbot that will quiz them on the main characters in The Great Gatsby. I don’t want to turn The Great Gatsby into a case study. I want to have the students read, discuss, and write,’” she said, “they don’t quite know what to say.”
If she is skeptical of AI’s educational value, though, she’s more fearful of the humanities becoming irrelevant to the conversation. “Humanities faculty aren’t helping themselves by not getting on the bus for the big win,” she said. “But I also don’t blame them because the bus has been designed for people other than us. That’s a real tragedy because nobody knows more about what literacy actually means than we do.”
Travis wants to rectify that, so she is working with UF’s AI² Center to create “a very modest guide” for humanities professors that offers low-stakes examples of AI use. If a professor can use it as a back-office tool, for example, “maybe they will hate it a little less.” That could open the door to using AI in other ways, she said, such as to augment or modify assignments.
Marc Watkins, assistant director of academic innovation at the University of Mississippi, tries in his work and writing to bridge the divide between those who loathe AI and those who want to use it. In a recent Chronicle essay he argued that resisting AI is both impractical and futile: AI is everywhere.
“So the question isn’t whether to participate in these systems — you already do — but rather, how to engage with them critically and intentionally,” he wrote. “Students deserve spaces where such inquiry is welcome. They deserve more than boilerplate policies, whether you’re an advocate of AI or an opponent.”
Watkins has been running AI workshops, engaging in debates and discussions with people like his colleague Eyler, and writing about AI on his Substack. AI literacy, he said in an interview, helps create a framework that moves beyond argumentation. It’s asking faculty members to do the same sort of critical thinking they expect of their students.
“When we talk about AI literacy, what I’ve been telling my folks when I train them is, Get practical, think about how this tool works, how it functions. Think pedagogically how this would work in the classroom. Or if you would resist it, how you would go about that. … But also start thinking philosophically about what this is going to do to your skill sets.”
Faculty have responded well to that approach, he said. “Even people who are very resistant to AI like being able to talk about these things and think about things.”
Several multi-institution projects are also underway to help universities figure out how to respond to the AI challenge.
The AAC&U is running an institute on AI, pedagogy, and the curriculum that works with dozens of colleges in thinking through their AI strategy. Ithaka S+R started a new project this year to help college libraries support AI literacy. Some academics who refuse to use AI in their teaching see literacy as fundamental to explaining that refusal, and are forming networks and sharing resources.
And researchers are continuing to explore generative AI’s effects on learning. In Virginia, UVA, George Mason University, and James Madison University are working together to help faculty conduct their own research. Among the topics they are exploring are whether AI can help students with disabilities, how to assess projects in which students submit a combination of their own work and AI-generated material, and whether personalized feedback from AI can improve student writing.
Travis, the University of Florida professor, hopes, too, that her colleagues in the humanities engage with the important questions about how AI is shaping the future. “In my more-optimistic moments,” she said, “I think of the monks sitting around in the scriptorium in the early 15th century, having a tankard of mead and saying, ‘Yeah that guy Gutenberg. That’s a flash in the pan. That’s nothing. What we do, that’s the real thing.’”
The printing press turned out to be a pretty good invention, helping usher in secular democracies even if it did destroy the monks’ way of life.
“So I think to myself, this feeling that I have, is it the result of being at a very weird moment of history, and things will move forward and something new I can’t understand from my historical location will happen?” she continued. “I would like to do what I can to ensure that it is a good thing, which means getting more smart people to understand how it works and think creatively about how it could be harnessed, rather than just being scared and depressed and angry about it.”
Beth McMurtrie is a senior writer for The Chronicle of Higher Education, where she focuses on the future of learning and technology’s influence on teaching. In addition to her reported stories, she is a co-author of the weekly Teaching newsletter about what works in and around the classroom. Email her at beth.mcmurtrie@chronicle.com and follow her on LinkedIn.