Students whose STEM courses are taught using active learning perform better than those taught with traditional lectures. That was the top-line finding of a widely cited 2014 meta-analysis, and it has been borne out in many other studies since. While research suggests that lecturing remains the dominant form of STEM instruction, the studies on active learning have bolstered high-profile calls for change that a growing number of instructors have taken to heart in recent years.
A new paper suggests, however, that those studies are not as solid, nor their findings as clear-cut, as active-learning proponents would like. Its lead authors, Amedee Marchand Martella, a National Science Foundation postdoctoral research fellow in psychological and brain sciences at the University of California at Santa Barbara, and her father, Ron Martella, a professor in the College of Education at the University of Colorado at Colorado Springs, had long doubted that the evidence on active learning was strong enough to support the kinds of blanket directives — like “just stop lecturing” — they were hearing.
The paper, “How Rigorous Is Active Learning Research in STEM Education? An Examination of Key Internal Validity Controls in Intervention Studies,” was published in Educational Psychology Review. It documents their examination, with several co-authors, of the studies included in the 2014 meta-analysis and a sample of more recent studies. The authors checked each study against 12 internal validity controls, like whether the students in the active-learning and lecture conditions were sufficiently comparable and whether the two conditions took place at the same time. For each control, they determined that a high percentage of studies failed to meet it, and no study met all 12.
This work, the authors argue, shows that better research is needed to truly understand the impact of active learning. But they also argue that comparing active learning to a traditional lecture isn’t really asking the right question. That’s because most of the instructors who use active-learning techniques also lecture — sometimes for a good portion of class time, as studies indicate. But even this loose idea of active learning, Ron Martella says, is more clearly defined than the category “traditional lecture.” He recalls taking an economics course in which the professor stood with his back to the class the entire time he was talking and writing on the board. But that isn’t what most lecture-heavy faculty members are doing.
The message to professors, he thinks, should be more nuanced than simply asking them not to lecture. “I’m telling you after being a professor for over 30 years, most people aren’t going to totally dump lecture, and no longer lecture again and only do active learning,” he says.
Amedee Martella, whose background is in cognitive psychology and who does applied research in education, says she thinks active learning is good. By that, she means it’s important for students to construct knowledge, integrate new knowledge with prior knowledge, and organize information in their memory. But she argues that either active learning or a more passive lecture can lead students to do those things.
The paper calls for more internally valid and more specific studies that investigate how best to use active learning and lecturing together.
Mixed Reactions
So, what does this paper mean for the researchers who study active learning, the faculty developers who encourage it, and the professors trying to determine the most promising way to teach?
Scott Freeman, the lead author of the 2014 meta-analysis, wrote in an email that there is a need for better research in the field, but he stood by his team’s work. “Many of their points are well-taken: every meta-analyst — no matter the field — tears their hair out about the flaws they see in the studies they are trying to extract interpretable data from,” he wrote. “So in that sense I very much endorse the team’s conclusion that we, as a research community, can do better when it comes to classroom-based research in higher education. But I stand by the criteria we used to screen admissible studies for our 2014 paper and the data and conclusions that resulted, which appear consistent with the subsequently published work that the Martella et al. group analyzed.”
To Regan A.R. Gurung, a professor of psychology at Oregon State University, the Martella paper underscores that “research in the classroom is really tough.” These studies did not meet all 12 internal validity controls, but research rarely does, says Gurung, who teaches research methods. This evidence, says Gurung — who is also associate vice provost and executive director of Oregon State’s Center for Teaching and Learning — “is still enough for us to say to an instructor, Really try and do some more active learning.”
He offers an analogy: “We know so little about sleep. But we still say, Get some sleep.” There are clear benefits, even though we’re still discovering why.
It’s worth remembering the broader context here, says Daniel Reinholz, an associate professor of mathematics and statistics at San Diego State University whose forthcoming book provides equitable-teaching strategies for college mathematics instructors. There may be many unanswered questions about active learning in higher ed, but those questions don’t live in a vacuum. There is still a lot of other, solid evidence on how people learn, says Reinholz. “The research base around this has been really well established in educational psychology and K-12 education for many decades.”
It’s intuitive, Reinholz says, that people learn by practicing: Think about how someone learns a musical instrument, or a sport. “If we look at studies from educational psychology coming out of tightly controlled laboratory settings, meeting the internal validity controls that one would care about, it’s really well established that simply studying more or re-reading something is not the best way to learn it.” What works is practicing, especially when there’s feedback. “Those studies have been replicated thousands if not tens of thousands of times, so there’s really no debate about that research.”
When it comes to classroom studies, Reinholz adds, there’s much more funding for work in K-12 than in higher ed. That means the K-12 literature has a lot more of the “multi-site, randomized, controlled interventions” that provide the strongest evidence. Classroom studies are still difficult, Reinholz notes, but the literature is still pretty clear that “when students get to participate, it helps them learn.” There are several reasons for this: Explaining something and getting feedback both promote learning, and participating also validates a student’s sense of belonging as a learner.
For all these reasons, the Freeman meta-analysis simply demonstrated that students learn in higher-ed STEM courses the same way people learn in general, Reinholz says. “Folks in the K-12 realm already knew that.”
The new paper didn’t really address the literature on active learning in higher-ed STEM courses on its own terms, says Lindsay Wheeler, senior associate director of the Center for Teaching Excellence at the University of Virginia. Some of those studies come out of discipline-based education research, which uses social-science research methods. Some come from the Scholarship of Teaching and Learning, which might not use those formal methods and is more practical in nature. Those two forms of research exist on a continuum, and in most cases the “researchers” conducting them are simply instructors applying some sort of scholarly approach to improve what happens in their own classroom. Much of this work, she says, was never intended to make causal claims.
Many of the professors who want to investigate their own teaching in a scholarly way don’t have the formal training to do so, Wheeler adds. Improving the quality of this research is the responsibility not of those individual instructors but of journal editors and reviewers who can help ensure that what a study shows — and what its limitations are — is well communicated.
There isn’t going to be a single study, Wheeler says, that can definitively tell faculty what to do. But all those studies, taken together, amount to something: “It’s not perfect,” she says, “but what we’re seeing is when we engage students in some type of way, this is beneficial.”
A Potential Downside
But there is a caveat, Wheeler says. One of her own studies suggests that when instructors take up active learning without appropriate support from their teaching center, their courses have larger gaps in failure rates by race than the courses of instructors getting that support.
“In observations of hundreds of classrooms,” Wheeler says, “I see instances where, when students are interacting with each other, there are microaggressions. There are clear implicit biases that students have toward each other. And those get exacerbated the more that we do active learning — unless the instructor is setting up and co-constructing expectations for engagement; talking through what it means to work in groups.”
The fact that active learning can go wrong underscores the need for colleges to better support instructors as they pursue it. That’s been top of mind for Meg Mittelstadt, director of the teaching center at the University of Georgia, because her university is working to create a culture of active learning to fulfill the Quality Enhancement Plan required by its accreditor.
Context matters, and professors have to figure out which active-learning approaches fit their own context. Then they can turn to the research. Mittelstadt also hopes that future research will include more examples of professors describing in Scholarship of Teaching and Learning papers how a technique worked out in their own classroom, which will help guide other faculty members interested in trying it.
There are all kinds of questions about teaching that research could help answer. But in the end, what most professors are looking for are good ideas about what is likely to work for them — and how to go about incorporating them.