We have reached a moment of reckoning about what artificial intelligence means for the human experience. This is a moment of reckoning, too, for higher education. It’s not enough for colleges merely to transfer knowledge and skills to AI’s future programmers and stewards. Colleges have a pivotal role to play in preparing all students for life with AI and in advancing human well-being in a digital world.
It is no exaggeration to say that recent advances in artificial intelligence have brought us to a turning point in human history. After millennia of living within and adapting to the physical world, humans — particularly in developed economies — now occupy a digital world that is just as complex and multifaceted as the physical one. Much of this recent change is attributable to AI.
Over the past few years, ChatGPT and other large language models have dominated the headlines, both for their astounding skill at replicating human language and for their mistakes and ethical pitfalls. But large language models only scratch the surface of how AI is interwoven into our lives. From industrial AI to augmented reality, virtual reality, and the internet of things, there is scarcely a domain of human experience to which AI is not being applied. We have a limited opportunity to figure out how to harness AI’s possibilities for the better and manage it effectively — instead of waiting for AI to manage us.
Though the analogy isn’t perfect, it’s useful to compare the rise of AI to the rise of the automobile in the early 20th century. For good or ill, cars reshaped virtually every aspect of existence in modernized societies. They connected communities, turbocharged economies, and expanded possibilities for people to live and work. But they also introduced a host of new physical dangers and environmental risks.
AI has the potential to transform our world to an equal degree, and at this moment we have the opportunity to anticipate those changes and brace for their potential impacts. Imagine if, at the outset of the automobile era, we had predicted a rise in carbon emissions and planned ahead for offsets. Or if we had proactively designed highways and roadways to minimize the disruption of neighborhoods. There is still time to do this for AI. But the window won’t be open for long.
The most straightforward way to construct a comprehensive accounting of the human experience of AI is to consider it with reference to three aspects of the self: the physical, the cognitive, and the social. AI is changing our experience in all three categories.
For example, an examination of AI with respect to our physical selves reveals some promising developments, but also some cautions. AI is now being used extensively in medicine. It can identify the presence of maladies a doctor may sometimes miss, such as breast cancer, skin cancer, and diabetic retinopathy. AI systems are accelerating drug discovery and personalized medicine by leveraging their capacity to analyze the human genome, and could lead to new treatments for diseases such as Parkinson’s, Alzheimer’s, and ALS.
At the same time, AI-generated medical diagnoses and advice can be wildly inaccurate, such as in the case of a chatbot promoting dieting to a person with an eating disorder. More fundamentally, a recent study by the American Psychological Association found that people who use AI frequently are more prone to sleeplessness, loneliness, and problem drinking.
AI also has mixed effects on our cognitive selves. While there is well-justified anxiety over AI’s impact on jobs and professions, the upside is that, when used appropriately, AI tools can drive greater productivity in the workplace, freeing employees to focus on more complex and strategic work instead of routine tasks. Industrial AI applications such as “digital twins” — virtual representations of physical systems in, say, a plant or factory — can help increase our analytical abilities, moving us from reactive problem-solving to predictive analysis.
On the other hand, some observers worry that our critical-thinking skills will diminish if we come to rely on AI-generated analysis without proper scrutiny. Others go even further, noting that the process of learning requires us to acquire baseline knowledge before we can proceed to higher-level analysis. If AI is doing the basic work, they argue, we won’t learn the basic skills.
Now consider our social selves. In the past decade, AI-enabled virtual reality has had a positive impact on our social lives by enabling us to communicate over long distances in a novel environment, almost as though we are face-to-face. Studies have found that people deem VR environments to be more genuine, and to create a better sense of presence, than conventional forms of long-distance video conferencing such as Zoom. VR programs employing digital avatars have been used to help people with autism better recognize facial expressions, body language, and emotions from a person’s voice, aiding the development of social skills. At the same time, AI-powered social media has had colossal negative impacts on mental health, exacerbating problems with self-esteem, isolation, and bullying. Moreover, AI makes it terrifyingly easy to propagate misinformation and disinformation, rending the social fabric and threatening democracy.
Stepping back still further, it’s worth considering how AI affects humankind on a macro level, broadly affecting food and energy consumption, the environment, social stability, and our species’ very existence. Here, the early track record of AI presents a decidedly mixed bag. For instance, AI is driving agricultural innovation by helping farmers optimize irrigation and fertilization, yielding better crop outputs. On the other hand, AI uses an enormous amount of energy: According to the International Energy Agency, data centers, the infrastructure behind AI, accounted for between 1 and 1.5 percent of worldwide electricity use in 2022, a figure projected to double by 2026.
Policymakers are grappling with how to regulate AI. Governmental approaches include the European Union’s focus on data privacy and intellectual-property implications and the United States’ focus on arenas where AI arguably should be prohibited, such as autonomous warfare. And of course, we all have heard the fears that AI could spell the end of the human race itself. A recent report commissioned by the U.S. State Department flatly stated that the misuse of AI by a nation or rogue actors “could pose an extinction-level threat to the human species.”
It is high time, then, to contemplate systematically what it means for humanity to exist in an AI-driven world. Such an accounting must do more than merely tally up the pros and cons of these new technologies. It should offer a road map for how we can maximize their positive aspects and minimize the negative ones for the benefit of humankind. To my mind, there is one societal institution that is ideally positioned to provide this kind of accounting: the university.
There are many reasons for this. First and most obviously, colleges are among the few institutions with the multidisciplinary expertise to construct a comprehensive accounting of AI, which will require contributions from fields as diverse as engineering, ethics, and sociology. Second, colleges’ practice of subjecting ideas, hypotheses, and findings to rigorous questioning and challenge supports the development of a systematic analysis of AI that is deeper, more substantive, and more accurate than popular discourse can provide. Unlike other parts of society where such analysis could also take place — such as the many private-sector companies that are actively and rapidly developing AI tools — colleges are honest brokers, less subject to distorting forces such as competitive pressures and profit motives.
But perhaps colleges’ greatest asset is that they are the port of arrival for the segment of society that will be most affected by present and future AI developments: students. It’s been widely noted that the cohorts of undergraduates who have matriculated over the past decade have been digital natives. But the students of today — and of the future — are AI natives. Understanding how they distinctively interact with, experience, and feel the consequences of AI will be crucial to understanding the technology’s long-term implications.
In my previous writings on AI, I argued that higher education needs to offer an educational framework to help students meet and master the opportunities and challenges of the AI age. Specifically, I called for a framework called “humanics.” This consists of, first, a baseline education in what I term the “New Literacies”: (1) technological literacy, or a basic education in the functioning of technologies such as AI; (2) data literacy, or how to understand and interpret the data emanating from such technologies; and (3) human literacy, the development of uniquely human attributes, such as creativity, critical thinking, entrepreneurship, and cultural agility, that continue to differentiate us from advanced machines.
In addition, I argued that students should immerse themselves in diverse experiential-learning opportunities outside the classroom to develop their uniquely human skills. Lastly, I proposed that colleges do more to provide robust lifelong learning opportunities as technology continues to reshape the workplace and the world, giving rise to the need for people to learn these new technologies, sharpen existing skills, and cultivate new ones.
I still believe this is a sound overall prescription. But just as AI has evolved from basic versions to more sophisticated iterations, the framework must evolve with it: a Humanics 2.0. Here are its key evolutions:
From foundational to multifaceted AI education. Now that the digital world has come into full bloom, a baseline education in how technologies like AI work, and how to interpret the data that emanates from them, is not enough. Since AI has extended its tendrils into nearly all facets of life, an education in it must be similarly comprehensive, providing a lingua franca that learners can apply across all of AI’s manifestations.
This should be panoramic, offering an understanding of how AI is affecting not just our economy, but also our institutions and our future as a species. At the same time, it should be personal, so that students learn to recognize AI’s fingerprints in their daily lives: in their personal finances and health care, in their homes and transportation, in their social-media feeds, and in the apps that recommend what to buy, whom to date, and what to believe.
This core education can be coupled with innovations at the interdisciplinary level. For example, colleges should incorporate instruction in how AI is transforming each subject of study so that students’ learning keeps pace in this fast-moving milieu. One way to do that is through combined majors that weave together disciplines with the thread of AI — for example, bridging computer science and theater. This develops a depth of knowledge while simultaneously exploring how technology may be changing subjects of study, challenging accepted shibboleths, or creating new opportunities.
Leveraging experiential learning to use AI, as well as surmount it. My initial formulation of humanics argued that experiential learning extends the benefits of classroom-based instruction. For instance, a co-op or long-term internship at a professional workplace not only gives learners a practical setting in which to apply subject-specific learning but also offers unexpected, unpredictable, and serendipitous moments for flexing critical-thinking, problem-solving, creative, cultural, and social skills.
In this respect, experiential learning allows human-centered talents to blossom, expanding the set of attributes learners have to distinguish themselves from, and stay ahead of, AI. It also fosters their ability to engage in what cognitive researchers call “far transfer”: the insightful application of learning to a vastly disparate situation or context, a hallmark that differentiates human intelligence from artificial intelligence.
These remain worthy goals. They should also be coupled with efforts to leverage experiential-learning settings as venues for learners to use AI, not just surmount it. Professional workplaces and other real-world settings are among the first places where people experience AI, feel its impact, and adapt to new technologies. “AI won’t steal your job, but someone who works with AI will” is a cliché, but that doesn’t make it untrue. Experiential settings are therefore among the best places for students to learn to use it.
Using lifelong learning for reinvention, not just to acquire new skills. Similarly, we need to broaden the lens through which we see lifelong learning as an answer to AI. Over the last decade, interest in lifelong-learning programs has grown significantly: The pace of change in professions, sectors, and society itself has accelerated, requiring people to refresh their learning at regular intervals so that they remain in tune with the world and economically viable. The newest AI advances add jet fuel to this proposition. Since people live in a rapidly changing digital world that is as meaningful and consequential as the physical and natural worlds, colleges must do more than meet the tactical need for acquiring new skills through lifelong learning. They will need to prepare people for true reinvention.
Readying learners for reinvention means moving beyond the notion that a university should help learners succeed in a single lifelong vocation, to the idea that the university should prepare learners for a multiplicity of changing roles throughout life, including ones that would seem unfathomable to their undergraduate selves. At my university, we’ve created ALIGN, a pathway for students with undergraduate backgrounds in the humanities or basic sciences to pursue careers in technology. Through it, they earn a master’s degree in AI and computer science while gaining up to a year of experience in the tech sector. The goal is not just to create career opportunities for individual students, but to ensure that people from a variety of backgrounds are in positions to develop and steer emerging technology.
Nearly two centuries ago, Cardinal John Henry Newman argued that the primary role of the university is to provide people with a comprehensive understanding of the world for its own sake: “Knowledge is capable of being its own end.” Updating this view for today, we can say that this now encompasses understanding the digital world comprehensively. Meanwhile, in the last century, John Dewey argued that education should be practical, pragmatic, and grounded in the “intelligent exploration and exploitation of the potentialities inherent in experience.” Updating this view for today, we can say that an education can no longer be fully practical if it does not prepare students to reinvent themselves for the relentless shifts of the digital world.
A full accounting of AI would seek to combine both impulses, to make both intellectual and practical sense of the digital domain. AI is our reality now: All of us on this planet, young and old, will be dealing with the ramifications of this technology for the rest of our lives. How this will affect our fortunes, our convictions, and our future as a species cannot be known fully. But by offering a framework for understanding and navigating the age of AI, colleges can put us on track to meet that future with confidence.