Hypothetical scenario: Next year, the American Historical Association releases a report recommending that generative AI “should be available for every student during every history class. Students should be permitted to use generative AI for all work including tests.”
Sound far-fetched? The National Advisory Committee on Mathematics Education issued the same statement in 1975 — about calculators.
Today, students of all ages have ubiquitous access to highly sophisticated computers. But in the 1970s, the emergence of affordable, pocket-sized calculators — capable of performing only the simplest arithmetic — induced panic among math teachers. How the field of mathematics responded offers lessons for today’s faculty members struggling to figure out how to deal with generative AI.
Rote memorization of numerical procedures had long dominated math instruction in the United States. Horrified at the thought of their students never learning how to perform long division with pencil and paper, some teachers banned calculators from their classrooms. Other instructors argued that calculators, by eliminating the need to expend so much classroom time on basic arithmetical routines, allowed them to focus more on teaching mathematical reasoning, data interpretation, and problem solving. It was unrealistic to demand that students do things by hand in the classroom when calculators had become commonplace outside of it. To these teachers, the world had changed and math instruction needed to change with it.
The reformers, in a sense, won out. Although many math teachers initially employed calculators only as an aid to existing praxis — for example, by allowing students to verify answers they had first obtained on paper — mathematics pedagogy evolved. National math-education organizations instituted new curricular and teacher-training standards. Teachers eventually accepted the calculator — and later its more powerful cousin, the personal computer — as important tools.
With generative AI, educators in the humanities and social sciences now face the same technological predicament that math teachers experienced 50 years ago. Reading has been central to the study of philosophy, history, and other humanistic fields for centuries, while writing has been the main vehicle for training students to become better thinkers and for assessing how good they are at it. Yet AI’s facsimiles of both activities are now indistinguishable from the real thing.
To make matters worse, many undergraduates, having acquired the attitude in elementary and high school that reading and writing are tedious chores devoid of meaning, have no qualms about letting AI chatbots do this work for them. For a significant number of faculty members in the humanities and social sciences, evaluating students has now become a demeaning exercise in grading robots.
Many professors are at a loss as to how to respond to AI. Some have resurrected assessment instruments from the pre-digital era, such as hand-written, proctored essay exams or in-person oral presentations, tactics that introduce a myriad of other complications. Others have chosen to remain willfully ignorant, holding on to the delusion that their course content and teaching methods are so inherently exciting that using AI would never enter their students’ minds.
The solution for faculty members is twofold: We need to acknowledge that AI has rendered much of our pedagogical repertoire obsolete and then we need to adjust — just as math teachers did when pocket calculators first appeared in the 1970s.
However, minor tweaks to assignments and exams will do little to counter the fact that chatbots can distill thousands of pages of scholarship into a few paragraphs in seconds, and, if industry predictions hold, may reach Ph.D.-level performance within a few years. Instead, faculty members need to move from the “what” pedagogy of the industrial era — students passively ingesting a heavily curated body of information for later regurgitation — to a “why” paradigm that turns students into builders of new knowledge through creative problem solving.
To achieve this goal, we must shift to a project-based course design that leverages students’ intrinsic curiosity while diminishing the advantages of delegating work to AI. Here’s what that might mean in practice:
- First, grant students the autonomy to select an unresolved real-world problem of interest that is relevant to the course’s subject.
- Then make them responsible for inventing a novel but pragmatic solution to the problem they have chosen.
While AI systems excel at retrieving, analyzing, and synthesizing information, they are far less proficient at forming and evaluating options for dealing with complex social problems. That limitation becomes readily apparent to students if they try to delegate the entire problem-solving process to a machine.
Most students will need to become more proficient at creative thinking before they can make valuable contributions to their projects. That’s why, in my own courses, I first offer opportunities for students to experiment with open-ended problems that have no single correct answer. For example, in a comparative-politics course, I might ask students to assume the role of political-risk consultants and, within 20 minutes, identify which of four African states is the most suitable location for a German company’s new manufacturing plant. Students must justify their reasoning and then, as an exercise in metacognition, reflect on what they learned about problem solving.
Professors can use problem-oriented projects regardless of academic discipline. For example:
- In an English literature course, students could adapt Mohsin Hamid’s How to Get Filthy Rich in a Rising Asia to a setting 30 years into the future, when unpredictable monsoon rains and extreme heat have made life in a major South Asian city even more turbulent. The instructor could have students build a plot around the protagonist’s son navigating climate-induced challenges after regaining control over the family business, but specify that, in doing so, they must preserve the original work’s stylistic conventions and core themes. This project requires that students simultaneously engage in literary analysis, creative writing, and environmental modeling.
- Similarly, students in a psychology course could develop an intervention that deals with a specific mental-health issue at their college, and pitch the idea to campus administrators. To design a realistic intervention, they would need to construct testable hypotheses that respect ethical and legal constraints, identify how to marshal resources, analyze data collected from preliminary surveys, and create a plan for assessing the proposal’s effectiveness.
Projects with such indeterminate outcomes elicit emotional investment and original thinking from students, making AI use less attractive.
Another advantage that such projects have over other pedagogies is that they are conducive to making learning a team sport. In most organizations, people work in groups and productive collaboration is critical to success. Good ideas can arise from conversations with colleagues, and people frequently learn tremendous amounts by teaching others. Working on a shared goal also fosters a sense of community. AI cannot yet replicate the benefits that come from these human interactions.
Finally, projects embed the development of skills within a meaningful context, making it easier for students to understand how they can transfer those skills to solving other problems in the future. For example, if students learn Philip Tetlock’s forecasting method to predict the likelihood of an armed conflict in a particular location, they will realize that they can use the same technique to generate forecasts on whatever topics they wish. AI chatbots, in contrast, respond only to the immediate specifics of the prompts they are fed.
By emphasizing problem solving and project-oriented teaching, I am not arguing for leaving students ignorant of basic principles while they pursue fruitless lines of inquiry. An introductory economics student who doesn’t understand the concept of interest rates will not be able to independently produce an authoritative analysis of Federal Reserve monetary policy by the end of the semester.
Faculty members should follow the lead of medical schools that use the curricular model of problem-based learning, in which novice students diagnose patients under the preceptorship of experienced physicians.
As the computer scientist Mitchel Resnick notes in his 2017 book Lifelong Kindergarten, educational systems are “stubbornly resistant” to change. Instruction in the humanities and social sciences has remained stuck in a print-age logic despite the arrival of the internet, waning student interest, and public perceptions of irrelevance.
With AI, what used to work in the college classroom to some small extent — telling students which questions deserved answers and what those answers were — no longer works at all. It’s time we recognized the need to do things differently.