Today’s college students can expect to be professionally active into the 2070s. Their careers will span an era in which artificial intelligence will have profound and transformative societal impacts. And they, as well as the generations that follow them, will have the responsibility of making the key choices shaping the future of AI. Colleges need to ask themselves what they can do today to prepare students to help build a future in which the power of this extraordinary technology is used in maximally beneficial ways.
While artificial intelligence isn’t new — in 1950, Alan Turing published a paper asking “Can machines think?” — recent years have seen extraordinary growth in AI investment and commercial impact. In the future, AI will be the driver for a long list of benefits, including improved crop yields, safer roads, more-effective medications, new tools to help identify and stop the spread of diseases, more-accurate forecasting of dangerous weather, streamlined transportation systems, and improved access to education.
But AI also brings risks. It can amplify the racial, gender, and other biases that are so deeply interwoven with much of the data that AI systems use as input. It can also be used by malicious actors to create “deepfake” videos aimed at influencing an election. More generally, because AI algorithms evolve on their own, they can behave in ways — including potentially problematic ways — that their human designers may not have anticipated.
While AI is a technology, we can’t expect technologists alone to identify its opportunities and face its challenges. Getting the most benefit from AI will require contributions from people trained in a wide range of academic disciplines.
We need philosophers, lawyers, and ethicists to help navigate the complex questions that will arise as we give machines more power to make decisions. We need political scientists to help us understand the geopolitical implications of AI, urban planners who can explore opportunities to bring it into smart-cities initiatives, economists to help us understand how it will change labor and manufacturing, and public-policy experts to formulate the frameworks that will create incentives for its positive uses and mitigate its negative consequences.
We also need scientists to explore how AI can provide improved climate models so we can better understand and fight climate change, and we need physicians and public-health experts to examine how AI can help expand the reach of medical care and reduce the spread and impact of disease.
To accomplish all of these things and more, colleges need to be full participants in preparing students to contribute to the growth of a beneficial AI ecosystem. This doesn’t mean converting colleges into institutions that see everything through the lens of AI. And it doesn’t mean that we should require every student to take a series of courses on how to code machine-learning algorithms. But it means colleges should offer all students, regardless of field of study, opportunities to learn about AI in a manner contextualized for their disciplines and interests.
In recent years, engineering schools have ramped up their AI course offerings. That’s a good start, but it isn’t enough. Colleges need to provide courses that focus on this technology more broadly.
One way to do this is through interdisciplinary courses, at either the graduate or undergraduate level, that focus on the intersection of AI with other fields.
In addition to classes focused on AI, a growing number of more broadly focused courses incorporate AI-related content as a subtopic. Among them: a University of Michigan at Ann Arbor undergraduate philosophy course titled “Minds and Machines” considers “minds, machines,” and “the relationships between them,” including the “promises and perils of artificial intelligence.” A Stanford economics course on “The Future of Finance,” cross-listed with multiple other departments, considers, among other topics, the use of AI in lending, investing, and algorithmic trading.
Much of the growth in AI-related course content is occurring organically as instructors propose new courses and update existing curricula in response to the increased visibility of AI in both academic and broader public discourse. As valuable as this is, it will inevitably leave some gaps. Thus, faculty leaders and administrators also play an important role, as advocates for rethinking curricula in anticipation of what is certain to be a surge in student interest in opportunities to learn about AI.
It is no easier to predict exactly how AI will evolve over the next half century than it would have been to predict the post-2000 rise of social media back in the 1960s. What is certain, however, is that artificial intelligence will be one of the defining technologies of the 21st century. To help promote the productive growth of AI and to mitigate its risks, colleges should provide students with opportunities to engage with its technological, legal, ethical, economic, social, and political implications.
John Villasenor is a professor of electrical engineering, law, and public policy at the University of California at Los Angeles. He is also the co-director of the UCLA Institute for Technology, Law, and Policy, and a nonresident senior fellow at the Brookings Institution.