These and other questions led researchers at Elon University to ask hundreds of experts — academics, business leaders, policy makers, researchers, and others — for their thoughts on how artificial intelligence will shape us over the next 10 years.
Their thoughts are not exactly reassuring.
In the report, Being Human in 2035: How Are We Changing in the Age of AI?, by Janna Anderson and Lee Rainie of Elon’s Imagining the Digital Future Center, experts predicted that AI would have a negative effect on nine of 12 core human traits or capacities.
They are:
- social and emotional intelligence
- capacity and willingness to think deeply about complex concepts
- trust in widely shared norms and values
- confidence in native abilities
- empathy and application of moral judgment
- mental well-being
- sense of agency
- sense of identity and purpose
- metacognition
These are many of the same traits, of course, that teaching experts say students need in order to learn effectively. So what will AI mean for teaching? I called up Rainie, director of the Imagining the Digital Future Center, to get his take.
First, Rainie said, it is important to note that these experts often presented nuanced views, predicting potentially negative or positive effects depending on how AI is used. “One of the things many of these experts were basically saying is, if you become too dependent and you outsource too much of the cognitive burden that every human being experiences in life, that’s when the red light, dangerous warning ought to be going off,” he said.
Second, the findings present a fundamental challenge to educators. “How are we going to be adept at this stuff?” Rainie asked. “How are we going to stay in control of this stuff? How are we going to co-evolve so that we bring our best, the tools bring their best, and lots of good things happen from that?”
Rainie noted that experts predicted AI would have more of a positive effect on three other human traits:
- curiosity and capacity to learn
- decision-making and problem-solving
- innovative thinking and creativity
When the experts spoke well of AI’s impact, they described it as a powerful tool that people can harness to their benefit. Jeremy Foote, a computational social scientist at Purdue University, for example, wrote that he believes “we will see a flowering of creativity as creative work becomes more accessible to more people. In that sense, AI may actually help us to express our humanity more fully.”
Finally, Rainie said that rather than avoiding AI, higher education would do well to think about how best to marshal it. “There are definitely some voices here that are like, ‘Don’t go in the water. It’s just too dangerous,’” he said. “But there are lots of voices here that talk about co-evolution, co-intelligence, partnerships, and the way in which human intelligence and machine intelligence can serve each other.”
Do you agree with this analysis? What does that mean for your teaching? Do you teach your students about AI as something that can either enhance or harm their uniquely human skills? Write to me at beth.mcmurtrie@chronicle.com and your story may appear in a future newsletter.
Are students using AI wisely?
A new report by Anthropic digs into how college students are using its genAI bot, Claude, in their academic work, which students use Claude most frequently, and how usage varies by discipline. Researchers studied one million anonymized student conversations.
Among the findings: in 39 percent of conversations, students used Claude.ai “to create and improve educational content across disciplines,” and in 34 percent they used it “to provide technical explanations or solutions for academic assignments.”
STEM students are the most frequent users, with computer-science students in the lead: they account for 37 percent of conversations with Claude, the report notes, even though they earn only 5 percent of U.S. degrees. Students in the natural sciences and math are also overrepresented.
If you read this and are concerned that students are using Claude to cheat, so are the authors. They note an “inverted pattern of Bloom’s Taxonomy domains,” meaning that Claude was completing “higher-order cognitive functions,” such as creating (40 percent of usage) and analyzing (30 percent). Much less common were lower-order tasks: applying (11 percent), understanding (10 percent), and remembering (2 percent).
“The fact that AI systems exhibit these skills,” the report states, “does not preclude students from also engaging in the skills themselves — for example, co-creating a project together or using AI-generated code to analyze a dataset in another context — but it does point to the potential concerns of students outsourcing cognitive abilities to AI. There are legitimate worries that AI systems may provide a crutch for students, stifling the development of foundational skills needed to support higher-order thinking. An inverted pyramid, after all, can topple over.”
The release comes as Anthropic launches Claude for Education, which includes a “learning mode” that, the company says, “guides students’ reasoning process rather than providing answers, helping develop critical thinking skills.”
That may help, but educators and students know there are plenty of ways to get around such guardrails, including using other apps.
If you have thoughts on these findings, write to me at beth.mcmurtrie@chronicle.com.
Thanks for reading Teaching. If you have suggestions or ideas, please feel free to email us at beth.mcmurtrie@chronicle.com or beckie.supiano@chronicle.com.
— Beth
Learn more at our Teaching newsletter archive page.