A new survey of college leaders shows deep concerns and some stark divisions about how well they believe their institutions are preparing students and faculty members to use generative AI. Senior administrators are also divided as to whether the impact of the technology on their colleges will be more positive than negative. The vast majority believe it could enhance learning, but virtually all are worried about its impact on academic integrity.
“Leading Through Disruption: Higher Education Executives Assess AI’s Impacts on Teaching and Learning,” released today, was conducted by the American Association of Colleges and Universities and Elon University’s Imagining the Digital Future Center. More than 330 university presidents, chancellors, provosts, and other senior leaders were surveyed late last year, although the report noted the survey was nonscientific and not generalizable.
The responses collectively painted a picture of a sector not yet ready to handle the changes that generative AI is bringing to higher education and society at large. Only 43 percent of college leaders said their institution was ready to use generative AI effectively to prepare students for the future. The same percentage responded positively when asked whether they were preparing faculty for teaching and mentoring with generative AI. When asked about challenges to adopting AI in courses, 93 percent cited faculty unfamiliarity with generative AI, and 84 percent cited lack of training and support.
In a foreword to the report, Lynn Pasquerella, president of AAC&U, emphasized the need for higher education to take stronger action to address these challenges. “Senior leaders,” she wrote, “must actively investigate and seek to comprehend the risks and rewards of AI, commit to ongoing professional development for faculty and staff around AI, create new jobs and allocate budget lines to support AI implementation, test and validate new AI processes, and infuse AI literacy and ethics throughout the curriculum from first-year experiences to senior seminars and from general education to the major.”
Respondents expressed a range of views on the overall impact of AI on their campuses. Forty-five percent thought it would be more positive than negative, 27 percent thought it would be equally positive and negative, 17 percent thought it would be more negative than positive, and 10 percent said they didn’t know.
This uncertainty is not necessarily surprising, but it illustrates how slow higher education has been to address AI systematically. ChatGPT was released more than two years ago, yet just 69 percent of campus leaders reported that their college had policies addressing appropriate and inappropriate uses of generative-AI tools in teaching and learning. Sixty-three percent have some sort of task force to manage AI implementation across campus. And 44 percent said they had new classes that focus on AI. Just one-fifth offer majors or minors in AI.
Only 10 percent of respondents said their institution is spending a lot more this academic year on people, hardware, and software to integrate generative AI across campus. Fifty-three percent said they are spending a little more.
Leaders at institutions with more than 10,000 students generally expressed more confidence: 27 percent said they were far ahead or above average in using AI tools, compared with 17 percent of those at colleges with fewer than 3,000 students.
The responses of senior leaders mirror, in part, findings of a survey last June by Ithaka S+R, in which instructors were asked about their AI use. Two-thirds said they were at least somewhat familiar with generative-AI tools. But only 32 percent reported that they were confident in their ability to apply AI in their teaching. And 67 percent either disagreed or were neutral on whether AI would benefit teaching in their field.
‘An Inflection Point’
The report’s authors, C. Edward Watson, vice president for digital innovation at AAC&U, and Lee Rainie, director of the Imagining the Digital Future Center at Elon, said the mixed views higher-education leaders hold about AI’s impact are striking.
“This is really an inflection point,” said Rainie, who is the former director of the Pew Research Center’s team studying the internet and technology. “What we see in these data is there are people who are gung-ho to move forward, and there are people who are terrified, who don’t know how to move forward or don’t want to move forward. That makes it sort of hard to figure out, as a broad class of institutions, where the future lies.”
Rainie noted, too, that leaders felt AI holds both great potential and great dangers. More than 90 percent said it could enhance and customize learning, for example. To a lesser degree, they thought it could be used to improve students’ research skills and increase their ability to write clearly and persuasively. At the same time, 95 percent felt it increased concerns about academic integrity. They also worried that students would overrely on the tools and that AI could widen digital inequities.
The vast majority of leaders agreed that students should be taught about the problems that generative AI creates, including privacy issues around personal data, inaccurate statements, and the use of generative AI to intentionally promote disinformation.
Among AI’s potential positive effects, a majority of respondents said they expect the quality of assignments to improve, AI tools to relieve faculty members of some routine work, and the technology to help with academic research.
Most college leaders also felt there were acceptable ways for students to use AI tools. For instance, 88 percent said it was OK for students to use them to brainstorm and refine project ideas. Eighty-one percent said it was fine for students to use AI to improve their writing by, for example, fact-checking claims. But they were less comfortable with more involved uses: Only 51 percent said it was acceptable to ask AI for a detailed outline for a writing assignment and then follow that outline.
Watson sees some promise in the fact that, given how slowly curricular reform happens, 14 percent of institutions said that they have adopted AI literacy as either an institutional or a general-education learning outcome. Among institutions with more than 10,000 students, that figure is one in five. He speculates that AI literacy may become an essential learning outcome in the way that critical thinking is.
“It’ll be interesting to see such data a year from now or two years from now if we’re at 40 percent, or more than half of institutions in the U.S. have made that move,” he said. “But this is quick movement in only a two-year span.”