Since generative-AI models like ChatGPT surfaced last November, they’ve caused a frenzy in college classrooms.
Professors report that students are using ChatGPT to write essays and complete math homework. Many instructors are turning to anti-plagiarism software or tweaking assignments to try to prevent cheating.
To computer scientists, however, the rise of artificial intelligence is no different from the advent of the pocket calculator or the Google search engine: It’s a tool that, if used correctly, can help people learn faster and think on a deeper level.
Computer scientists have been researching artificial intelligence since Alan Turing and John McCarthy helped create the field in the 1950s; McCarthy founded Stanford University’s Artificial Intelligence Laboratory in 1963. Several computer-science professors told The Chronicle that they plan to use ChatGPT in classroom assignments.
The field is also grappling with how to use the technology responsibly. Purdue University recently became one of the first institutions to offer AI majors that emphasize ethics and psychology.
Aniket Bera, an associate professor of computer science at Purdue, said he had been collaborating with psychology and psychiatry professionals to make AI work better for them.
“Instead of running away from it, I think we should strive to embrace it,” Bera said. “Once people start embracing, we’ll try to understand what the problems of these things are and then try to fix it.”
As colleges prepare for a year in which ChatGPT and similar programs will become increasingly pervasive, the field of computer science offers a model for how higher ed might integrate artificial intelligence into learning. At the same time, experts say, computer-science professors should collaborate across disciplines and with their departmental colleagues to understand and respond to the technology’s pitfalls.
Using AI in the Classroom
Computer-science faculty members like Peter Stone are incorporating AI into the curriculum by emphasizing the technology’s flaws.
While ChatGPT can generate basic code, it makes mistakes, especially as the code gets more complicated, said Stone, a professor of computer science at the University of Texas at Austin. Professors can teach basic coding in introductory classes and have higher-level students edit the code that AI generates.
Bruno Ribeiro, an associate professor of computer science at Purdue, gives students unique coding problems that seem simple on the surface but have slight variations that often trip AI up. He then has students identify where the program went wrong and fix the code.
“At the end of the day, what they really learn is how to think and how to check things and how to verify if something is right or something is wrong,” Ribeiro said. “In my classes I tell them, ‘Look, if ChatGPT gives you the answer, that’s great, but if it’s wrong, you are responsible for it.’”
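To make the exercise concrete, here is a hypothetical example in the spirit Ribeiro describes; the task, the bug, and the test cases are our illustrative inventions, not material from his course. A chatbot-style solution passes the obvious test but silently fails on a slight variation, and the student’s job is to catch and repair it.

```python
def second_largest_buggy(values):
    """Plausible AI-generated answer: fails when the largest value repeats."""
    ordered = sorted(values)
    return ordered[-2]

def second_largest_fixed(values):
    """The student's fix: deduplicate before sorting."""
    distinct = sorted(set(values))
    if len(distinct) < 2:
        raise ValueError("need at least two distinct values")
    return distinct[-2]

assert second_largest_buggy([3, 7, 5]) == 5   # passes the obvious test case
assert second_largest_buggy([3, 7, 7]) == 7   # silently wrong: should be 3
assert second_largest_fixed([3, 7, 7]) == 3   # correct after the fix
```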
Beyond computer science, a range of liberal-arts classes can use similar methods to promote AI literacy and encourage students to read critically, said Anna Mills, an English instructor at California’s College of Marin who researches AI in writing courses.
Students can, for instance, analyze a conversation with ChatGPT as an assignment and identify signs of fabrication, biases, inaccuracies, or shallow reasoning. Johanna Inman, director of the Teaching and Learning Center at Drexel University, in Philadelphia, suggests that faculty members have students use AI to write a first draft of an essay and show what they might change. Or instructors could include AI as a contributor to group discussions.
Identifying AI’s flaws motivates students and helps them build confidence, which can discourage cheating, Mills said.
“Pointing out where it still really messes up is very powerful, both for learning about what these systems are and for empowering students to see their own strengths as human thinkers,” Mills said. Certain courses can also teach students the best ways to communicate with AI tools, she added, by using critical thinking and rhetoric.
Many writing instructors still worry that incorporating AI will prevent students from developing the skills that come from learning how to read and write, Mills said. Some computer-science professors expressed similar concerns, saying that students need to learn the foundations of coding to be able to change and advance technology in the future.
Stone suggested that instructors continue to emphasize foundational skills in introductory classes by clearly explaining to students when AI is appropriate to use. Once students have learned basic coding, they could use AI to complete their assignments faster in advanced classes, he said.
“My job is to instruct students on what they need to do to learn the concepts that are covered in the syllabus of the course,” Stone said. “If I tell them, ‘Here’s an assignment, do this work without ChatGPT,’ and then they go and do it with ChatGPT, they’ve basically lost that opportunity to learn.”
Academic integrity is another pitfall. AI is accessible to anyone with an internet connection, and it’s less stigmatized than other methods of cheating, Mills said. Though professors can sometimes detect when a student uses AI, it can be difficult, she said, and many detection tools, such as Turnitin, can be unreliable.
“It’s a significantly increased level of students using it and feeling that temptation,” she said.
UT-Austin has started offering seminars on how professors can incorporate AI into their lessons, as well as strategies for communicating when students are allowed to use it, Stone said. Drexel’s Teaching and Learning Center also offers teaching tips and panels on the best ways to use AI in a variety of classes, Inman said.
“There’s more of a danger in not teaching students how to use AI,” she said. “If they’re not being taught under the mentorship of scholars and experts, they may be using it in ways that are either inappropriate or not factual or unethical.”
Combining Forces
Computer science is also taking the lead in understanding AI’s complicated ethical dilemmas, such as unequal access, who controls the technology, and how it is used, said Chris Piech, an assistant professor of computer-science education at Stanford.
It’s not clear how many AI-specific majors or courses exist, but experts said that more colleges would probably begin offering AI degrees that connect with other departments through teaching and research. UT-Austin recently introduced an online master’s program in AI, Carnegie Mellon University offers a bachelor of science in AI, and Purdue offers both a B.A. and a B.S. in the field.
Purdue’s B.A., which is offered through the philosophy department, requires several introductory courses in computer science that focus on the technical parts of AI, while primarily teaching the ethics and philosophy of the technology.
One course explores how video games pose classic philosophical questions, said Javier Gomez-Lavin, an assistant professor of philosophy, who created the class. Students play games for a portion of the course, and eventually design new games that tackle those questions.
“Given that there’s going to be large language models that are going to have new and kind of unprecedented impacts on the way people work,” Gomez-Lavin said, “how can we actually prepare students to leverage the best of critical thinking and have some inside knowledge of these systems themselves?”
The B.S., which is housed in the computer-science department, offers more-advanced technical classes while requiring philosophy, psychology, and ethics courses. Through the major, students learn how human beings interpret and use information that comes from AI, said Chris Clifton, interim head of Purdue’s computer-science department.
“We’re not just looking at the AI system itself and what it says,” Clifton said. “We’re actually looking at the outcome in terms of the final effect on the person who’s impacted.”
Computer-science classes encourage students to analyze AI’s biases as well as the problems it causes when it’s used in the real world. Because people created these systems, many of their problems, such as gender and cultural biases, are fundamentally human.
Some of Ribeiro’s students at Purdue are investigating how AI fails to account for unexpected events. As a case study, they learned about Zillow Offers, a program created by the popular housing website Zillow. The program used an AI algorithm to determine how much to offer for a house. The algorithm worked well in tests but didn’t account for changes in the housing market, so many of its predictions were wrong when the company introduced it publicly. The result was a $300-million loss.
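The underlying failure mode is easy to sketch. The toy model below is our illustration, not Zillow’s actual algorithm, and every number in it is invented: a predictor fit on a rising market keeps projecting the old trend after prices turn, so it systematically overbids.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training period: three years of prices rising about 1% a month.
months_train = np.arange(36)
prices_train = 300_000 * 1.01 ** months_train + rng.normal(0, 2_000, size=36)

# Fit a simple linear trend (a stand-in for a far richer algorithm).
slope, intercept = np.polyfit(months_train, prices_train, 1)

# Then the market turns: prices fall about 1% a month instead.
months_test = np.arange(36, 48)
prices_test = prices_train[-1] * 0.99 ** (months_test - 35)

# The model, trained only on the boom, keeps projecting the old trend.
predicted = slope * months_test + intercept
overbid = predicted - prices_test
print(f"Average overbid per home after the shift: ${overbid.mean():,.0f}")
```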
“As an educator, the best we can do is give them foundations they can build on because it’s very hard to determine in five years what the new method will be,” Ribeiro said. “What we can do is make sure they understand the advantages and drawbacks of these methods.”
Mills also emphasized the need to proceed with caution. While the technology offers many exciting ways to teach and learn, proceeding with little regulation is dangerous, she said.
“People are exploring it, it is very exciting, and we can, to some extent, share that with students,” she said, “even as we’re strongly emphasizing that as soon as you start to learn about it, you learn about the fabrications and bias and ethical concerns.”
AI in the Work Force
Many computer-science professors believe incorporating AI into the classroom is the best way to prepare students for the future of their industry. Ignoring such tools would also be a disservice to students outside computer science, who will probably need to use them in their careers, said Drexel’s Inman.
By learning the flaws of the technology and the basics of how it operates, computer-science students will be able to improve it once they begin their careers, said Stone, the UT-Austin professor.
“It’s not that people are going to lose jobs to AI. People who don’t know how to use AI are going to lose jobs to people who do know how to use AI,” Stone said. “We need to train our students to use the tools and to know what’s out there.”
But that doesn’t mean offering classes like ChatGPT 101.
The AI models that are popular now are likely to change in a few years, said Purdue’s Clifton. He believes that ChatGPT will create an incremental shift, not a fundamental one. In a few years, he said, the technology could be completely different, and students entering the industry need to learn how to adapt to a tool that may not even exist yet.
“One of the key things we teach people is to learn new things because throughout their careers, that’s what they will need to do,” Clifton said. “This is just another new thing.”