In January, Ondrea Wolf logged onto Blackboard Collaborate to give some 60 faculty members and staff at El Paso Community College a presentation on artificial intelligence.
Wolf, who directs the college’s Quality Enhancement Plan and assessment efforts, wanted to get colleagues excited about how they might incorporate AI tools into their work, giving them a taste of basic prompting language and strategies. A number of them, though, remained hesitant, Wolf recalled. They were still laser-focused on the well-trod question of whether there’s a reliable way to know if students are using AI to cheat.
This is the reality in higher education right now. In the 15 months since OpenAI released ChatGPT, generative AI — a type of artificial intelligence — has stirred a mercurial mix of excitement, trepidation, and resistance across all corners of academe.
Some instructors have embraced it, retooling their curricula to teach judicious, ethical use of this now-ubiquitous technology, which uses trained algorithms to produce text, images, video, and other media that can be nearly impossible to distinguish from human products. Some academics serving as peer reviewers are using generative AI software to identify gaps in research papers. College marketers and admissions officers on some campuses are tapping the tools as their editorial assistants. Others, though, have been steering clear, deeming the tech too confusing or problematic.
On one point, there is nearly unanimous agreement from sources The Chronicle spoke with for this article: Generative AI, or GenAI, has brought the field of artificial intelligence across an undefined yet critical threshold, and made AI accessible to the public in a way it wasn’t before. These technologies are now poised to shape broad swaths of the knowledge economy, and the wider work force.
But GenAI’s role in higher education over the long run remains an open question. The sector as a whole has yet to demonstrate that it can adapt and keep pace.
Proponents of the continued integration of GenAI argue that these technologies could be a lifeline for colleges. Institutions might use them to operate more efficiently as they are forced to do more with less. Colleges might also prove their value by training students for an economy with burgeoning employer demand for AI and GenAI skills.
Higher education and the field of artificial intelligence, though, are fundamentally mismatched in a number of ways. AI and GenAI technologies are maturing rapidly, while colleges are historically slow to evolve. Institutions have also traditionally tied much of their value to teaching critical thinking and problem solving — skills that, at face value, are not synonymous with AI, and that such technologies could even impede.
“What you’re seeing is a collision of two different ecosystems,” said Bryan Alexander, a scholar who writes about the future of higher education. And what higher ed stands to gain or lose from that collision, he said, is deeply nuanced.
Academics including Ethan Mollick, an associate professor at the University of Pennsylvania’s Wharton School who studies the effects of AI on education, see clear benefits. GenAI tools like GPT-4 could make learning more equitable, he said, providing services such as personalized tutoring. Other academics, while not wholly opposed, are wary of unintended consequences. They noted that a lack of AI literacy and uneven adoption in the sector could instead worsen disparities in students’ learning experiences and widen knowledge gaps among graduates.
“Could this create another level of haves and have-nots? That would be the equity issue,” said Emelia Probasco, a senior fellow at Georgetown University’s Center for Security and Emerging Technology.
Data show that investments in AI at the institutional level have so far been highly concentrated. An analysis of current and archived job postings on The Chronicle’s jobs site found that the number of AI-related listings more than doubled between 2022 and 2023. (By comparison, the jobs site as a whole saw a 57-percent increase in listings during that time.) And in 2023 alone, just five institutions — Northeastern University, Carnegie Mellon University, the University of Pennsylvania, Clemson University, and the University of Florida — accounted for nearly half of all AI-related postings.
Whether or not higher ed can keep pace, experts said, these technologies should be in administrators’ and faculty members’ line of sight. Colleges have a responsibility to students, they said, to be tuned into conversations around artificial intelligence and GenAI — particularly if higher ed is to wield any influence as Congress mulls broader regulations around responsible use.
Colleges can’t “deal with this later,” Wolf said. “There is no later. That later is actually tomorrow.”
For a lot of people in academe, artificial intelligence may feel new. Futuristic. But the field of AI is actually older than many modern-day innovations, including mobile phones and the internet.
“Artificial intelligence” was reportedly coined nearly 70 years ago for a summer workshop at Dartmouth College in 1956. The field enjoyed periods of considerable buzz, including in the 1980s and late 1990s, with the proliferation of computer systems that attempted to mimic human decision making, and the defeat of the reigning world chess champion by a machine, respectively.
But “AI winters” would follow the hype. Inflated expectations about AI’s capabilities, and subsequent disillusionment with its limitations and weaknesses, often led to “throwing the baby out with the bath water,” said Eric Siegel, a machine-learning consultant and author of The AI Playbook (MIT Press, 2024).
To be sure, certain applications for artificial intelligence have developed incrementally over the years, without as much media ballyhoo. The best-known, perhaps, is colleges’ use of predictive analytics, a process that leans on machine learning, to forecast things like prospective students’ likelihood of enrolling and current students’ likelihood of graduating. Generative AI, too, has progressed over decades, reaching an important milestone in 2017, when the transformer architecture gave models the ability to understand the contextual relationships between words in a sentence.
Many faculty members and students may find that they’ve used AI-powered tools for years without realizing it: adaptive-learning platforms, writing assistants like Grammarly, and chatbots, to name a few.
A past weakness of GenAI models that prevented more extensive use in higher education, Mollick said, was their constrained language capabilities. The forerunner to 2022’s ChatGPT, for example, could write a decent fifth-grade-level essay, he said. Not something most folks in higher ed would find useful.
That changed in late 2022, with the release of ChatGPT, built on the GPT-3.5 model. “These models became unaccountably good” compared to their predecessors, Mollick said.
Suddenly, more applications for college students and employees emerged. Those peer reviewers evaluating research papers? Some now use GenAI tools to get reader-friendly synopses of confusing chunks of text, or to identify existing research that’s been overlooked. Those college marketers working to engage prospective new students? Many now have a pseudo-assistant to draft their email and text campaigns, enabling consistent formatting and a uniform voice.
GenAI’s accessibility — the very quality whose absence has stymied large-scale adoption of tech advancements like virtual reality — has been another game-changer, experts said. The technology feels more human. One doesn’t have to be a computer-science fanatic, or a programmer, to engage with it.
That has created a wave of excitement and consternation. An essay about students’ use of ChatGPT was the most-read Chronicle article in 2023. And it’s not a passing fad. Nearly everyone The Chronicle spoke to for this article was confident of GenAI’s staying power in higher education. Industry watchers like Gartner, a technological research and consulting company, agree, predicting in a July 2023 report that generative AI technologies would become more mainstream in the higher-ed sector in two to five years.
“We looked across sectors, and higher education by far and away had the most active board-level discussions about generative AI,” said Tony Sheehan, a vice president and an analyst within Gartner’s education research and advisory team. “Why? Because it’s disruptive to the sector. It’s challenging to the sector.”
Is higher ed up to the challenge?
To its credit, the sector has faced real tests of its flexibility and resilience before — most recently and perhaps most remarkably when the Covid-19 pandemic forced colleges to pivot to remote learning.
Still, higher ed is known for being deliberative, building slow consensus, and operating in silos. (Two experts The Chronicle spoke with for this article affectionately compared trying to work quickly and collaboratively to “herding cats.”) And that culture, with some exceptions, has characterized the sector’s handling of GenAI.
The clearest example pertains to guidance and policies. Some institutions, such as Drexel University, have created task forces to develop academic-integrity policies and recommend to leadership how to stay on top of GenAI advances. Others say they are in an “exploratory” phase. Many, including Miami University and Purdue University, have left decisions on the use of GenAI tools to individual faculty members for now.
Why the mixed responses? On a practical level, many administrators don’t want to ink policies only to find they must update them again and again. Some sources pointed out, too, that the uses of GenAI, and AI more broadly, vary so much by subject matter and department that universitywide policies risk being either too restrictive or too burdensome.
As a faculty member in a theater department, for example, Tracy Miller doesn’t consider GenAI tools to be particularly relevant. Her field places high value on “people being in the room collaborating, and interpersonal communication,” said Miller, an assistant teaching professor at the University of Rhode Island. “It’s just a different way of thinking, and a different skill set.”
Other barriers speak to the heart of the enterprise: philosophical questions about what GenAI means for how colleges educate their students and for professors’ academic freedom. Many resist a development that, to them, feels counter to the spirit of academe.
That resistance goes beyond worries that students might use GenAI tools to cheat on assignments. Alexander, the futurist, said that especially in writing-heavy fields such as the humanities, a fear is that GenAI “teaches us that writing is transactional. That knowledge is something that is quick to obtain, and easy to dispense with. That it should be friction-free.”
Academe, conversely, has traditionally been “about exploring questions over time” in “an iterative, sustained process,” he said.
Running parallel to these concerns is an uncomfortable recognition, too, that these technologies may shift the skill sets of some jobs and render others obsolete.
Such reactions, of course, aren’t new. Advances in technology usually spark fear of disruption. A Newsweek article from 1980 described the nation’s schools as having been “invaded” by pocket calculators; a 1986 piece from The Christian Science Monitor quoted a teacher who’d decried the calculator’s “tremendous potential for destruction” of student math skills. And a Columbia University professor, Eli Noam, published a paper in Science in 1995 — the early days of the internet — warning that “new communications technologies” would “weaken the traditional major institutions of learning, the universities.”
“Instead of prospering with the new tools,” he wrote, “many of the traditional functions of universities will be superseded … their role in intellectual inquiry reduced. This is not a cheerful scenario for higher education.”
Higher ed certainly adapted to the calculator. And it did evolve alongside the internet, though that technology has challenged colleges’ role in intellectual inquiry — by some accounts, lamentably. GenAI’s rapid development is sure to complicate that role even further, in ways that are difficult to predict.
Asked to read the tea leaves, some sources said they see GenAI tools being used only in specific instances for specific purposes (as is often the case now), but not requiring any fundamental shifts in how institutions function. Others, like Paul LeBlanc, the soon-departing president of Southern New Hampshire University, are envisioning something else: a complete metamorphosis.
LeBlanc’s next career move involves leading a small team that will explore big questions, like: “What are the ways that AI would enable us to rethink learning to make it more effective? To serve more people?” (The transition feels apt for an academic who’s previously helped redefine what a college can look like, having overseen the genesis of one of the nation’s largest online institutions.)
Over Zoom, LeBlanc eagerly screenshared a PowerPoint slide with high-level examples of what such “rethinking” might look like. Valuing “what you can do” instead of “what you know,” for example. Assessing “understanding and application” instead of “knowing and repeating.” Defining learning as a “24/7” endeavor, versus one tied to academic years and semesters.
Making these kinds of changes, LeBlanc acknowledged, would be an arduous process. But one that he believes could be worthwhile.
“It will be slow,” LeBlanc said, “but we’ll learn a lot along the way.”
LeBlanc’s palpable optimism is not unique. Many believe, like Mollick, that tech such as GPT-4, available for $20 a month through a ChatGPT Plus subscription (or at no cost via Microsoft Copilot), can help level the playing field, providing a repository of resources and knowledge to any student. At any time of day, students can generate practice questions and answers for studying. They can use these tools as sparring partners to get feedback on their work. All without needing technical skills, like coding.
“What ed-tech has ever been at that level of spending for students?” Mollick asked. “...This is the biggest equity opportunity we’ve ever had.”
There could also be unique benefits to students who require accommodations for learning, Kelly Hermann, vice president for accessibility, equity, and inclusion at the University of Phoenix, told The Chronicle at Educause’s annual conference last October. For example, a GenAI tool that can facilitate speech-to-text — or text-to-speech — communication may be helpful for learners with motor impairments who struggle with typing, or for those with speech impairments, respectively.
“I would love to see how accessibility can actually enhance creativity and innovation” in fields like AI, she said.
There’s a broader chain reaction happening, too, with creativity around GenAI inspiring other uses of AI at colleges. Some admissions offices, for example, told The Chronicle last fall that they were thinking about how to use AI to be more resourceful and efficient. One staffer at the University of Oregon was interested in a machine-learning model that flags high schools “in which we’ve had any statistically significant” anomalies, like high yield, to better allocate time and energy for recruitment. Another at the University of California at Irvine was researching AI software that could ease the institution’s burdensome transfer process by comparing its course catalog against unfamiliar transfer courses to find the closest match for credit — a solution to a problem that notoriously keeps students at many colleges from earning bachelor’s degrees.
Still, numerous sources worry that the vast discrepancies in how colleges are handling guidance, access, and literacy training for GenAI tools could simultaneously breed further inequity in education.
Without at least establishing “basic truths” at the institutional level about what uses are and aren’t appropriate, colleges risk creating “a vastly different educational experience” for different students, said Kofi Nyarko, director of the Center for Equitable AI and Machine Learning Systems, at Morgan State University.
Nyarko recalled a student whose instructor relied on AI-detection tools, which are notoriously inaccurate, to review assignments. That instructor incorrectly claimed that the student’s dissertation was AI-generated, causing stress that the student likely wouldn’t have experienced with a faculty member who’d opted against such products.
Julia Lang, a professor of practice and associate director of career education and life design at Tulane University, said many students she’s spoken to are “getting really mixed messages” about where the line is when it comes to using tools like ChatGPT — and, of course, may disagree about where it should be. Large swaths of faculty and staff members have been feeling unequipped, too. Without clear policies and guidance, they have resorted to teaching themselves: consuming podcasts; attending virtual seminars during their lunch breaks; creating campus-support groups; and sharing resources — like crowdsourced collections of existing college AI policies — on online discussion boards.
Miller, at the University of Rhode Island, said many faculty members, herself included, have already been “reinventing the wheel” every semester since the pandemic began. It can feel exasperating, then, when administrators expect them to independently develop new skills, such as GenAI expertise. All with a stagnant paycheck.
Crafting some form of institutional policy would also be a smart defense against liability, said Wolf, at El Paso Community College. Unless otherwise stated, or unless a user proactively opts out, GenAI tools like ChatGPT are a two-way street: Whatever one puts in can become part of the model, as training data. That should raise concerns, she said, about things like student data privacy. (There are also documented risks of copyright infringement and other intellectual-property violations.)
Guidance and policies aside, whether college employees have access to GenAI tools, and resources to innovate, is another equity consideration.
At least for now, innovation around AI is happening in pockets. (It almost always does, to be fair.) A smattering of institutions, including New York University’s McSilver Institute for Poverty Policy and Research, have hired chief AI officers. The University of Florida has christened an Artificial Intelligence Academic Initiative Center to drive the development of AI academic programs and certificates. California State University at Sacramento in December unveiled the National Institute for Artificial Intelligence in Education to train current and future teachers on ethical AI use. Arizona State University recently announced it is extending free access to ChatGPT Enterprise to approved university members. The University of Wisconsin at Madison plans to hire up to 50 new faculty members in AI as soon as this spring.
Sheehan, at Gartner, noted that the scope and type of institutional investments in AI shouldn’t be compared apples-to-apples, though, as colleges’ missions and strategic objectives often differ.
There are “opportunities for everyone, and different flavors of innovation,” Sheehan said. A top research university, for example, might be inclined to prioritize AI and GenAI research. A community college might be more focused on preparing students for the future job market. (Forrester, a research and advisory company, estimates that GenAI will reshape more than 11 million jobs by 2030.)
“They both want student success,” Sheehan continued, “but they mean that in very different ways.”
Take Dakota State University. It has maintained a “special focus” on STEM and information technology since the 1980s — spurred in part by the rising local demand for computer programmers after the corporate giant Citibank set up regional headquarters in Sioux Falls, S.D., in 1981. Because of this tech-centered mission, President José-Marie Griffiths said, the institution homed in on artificial intelligence well before the arrival of ChatGPT. It hired its first faculty member for AI in 2016.
“We lean into every technology because that’s our role: to prepare our students not for today’s technology … but for tomorrow’s technology,” said Griffiths, who also previously served on the National Security Commission on Artificial Intelligence.
Griffiths and those like Lang at Tulane underscored that providing physical access to products like GPT-4 is just one step; colleges must also teach AI literacy and best practices. How well employees and students understand what GenAI tools are, their propensity for biases and “hallucinations,” and how to effectively design prompts will determine whether the tools are a help or a hindrance to learning.
So whose responsibility is it, ultimately, to prevent a scenario of haves versus have-nots? Especially amid a flurry of news about program cuts, deficits, campus closures, and budget-mismanagement crises, it can be hard to picture many institutions loosening the purse strings in the name of AI innovation.
Experts pointed to the federal government and its agencies, some of which are providing a growing number of grants and resources to institutions. The National Science Foundation, for one, has awarded 25 “AI Institute” research grants since 2020 — most of them above $1 million — and recently announced a multi-agency, cross-sector initiative, the National Artificial Intelligence Research Resource pilot program, to help “democratize access” to research, software, training, and other supports.
Those like Morgan State’s Nyarko and Georgetown’s Probasco said institutions aren’t off the hook, though. While investments may differ by institutional need, colleges — at a minimum — need to make a priority of training employees and students on AI literacy and ethical use.
“Institutions have to look at where they’re putting their time, and their money, and their focus, and ask the question: Am I preparing my students for a time and period that has passed, or am I preparing them for the future?” Nyarko said. “Everybody has a role to play.”
Experts added that institutions and their leaders should be tuned in not only to developments on their own campuses, but at the state and federal levels, too.
Congress is looking to advance bills that address risks such as bias and loss of control over personal data, hoping not to drop the ball on regulation the way it did with social-media platforms in the late 2000s and early 2010s. And while there is a mix of hope and skepticism about the timing and breadth of any such legislation, there is broad agreement that higher ed needs to be at the table.
AI “is this complex, new thing to a lot of people,” said Joseph Hoefer, a principal at Monument Advocacy who leads the company’s AI working group. “You need academics in the room to help explain it at all the different levels,” and to help ensure that — despite the breakneck speed of AI and GenAI development — regulatory decisions are “deliberate.”
When Congress held nine forums last fall on artificial intelligence, 21 of the 165 total attendees, or 13 percent, were from academe, according to the Tech Policy Press’s tracker. Nearly triple that share, 35 percent, were representatives of the tech industry and/or venture capital, including Meta, Google, Microsoft, and OpenAI.
“The conversation that’s happening right now is certainly dominated by industry,” Probasco said. “I think they’re doing it because they genuinely want Congress to know how this stuff works so they don’t make bad policy. But certainly there are other motivations — and not all actors are good.”
Alexander, the futurist, said institutions could wield considerable clout if they banded together on certain agenda items. He pointed to one historical example: When Google approached college libraries in the early 2000s asking to digitize their repositories for a scanning project, the libraries made it part of the deal that they would get to keep their own digital copies. That material would end up feeding the HathiTrust Digital Library.
“The history of academic collaboration is one that’s pretty bad,” Alexander said. But he added, “if we can get our act together” and tell big-tech companies, “‘We need X, Y, and Z to make these tools work for us, and we really don’t want to have A, B, and C,’ maybe we could actually influence these tools for the better.”
There’s no question: Higher education has a tough job ahead. Sure, GenAI may not wholly upend higher education as we know it. But it will challenge it, and push colleges, on some level, to rethink what they value and how they can best serve their students. Waiting for this moment to pass is not an option.