Much of the debate about educational quality tends to pull toward extremes: Either America’s colleges are the envy of the world, or they’re of questionable worth. A new study that used an unusual methodology to quantify rigor and teaching suggests a more moderate and nuanced view.
“Our findings are showing we’re in between,” said Corbin M. Campbell, an assistant professor of higher education at Teachers College at Columbia University. “There’s room to grow here.”
Ms. Campbell was the principal investigator for the College Educational Quality project, a study of two selective, midsize research institutions. One was public, the other private.
She and a team of 10 graduate students spent a week in the spring of 2013 observing 153 courses. They analyzed 149 syllabi.
The methodology allowed them to examine academic rigor and the quality of teaching in ways that conventional research tools, like student surveys, standardized tests, and faculty self-reports, often fail to capture.
“We could see how the instructors engaged students,” she said.
The graduate students observing the courses were pursuing degrees in higher education. As often as possible, they watched courses in the disciplines in which they had earned their baccalaureate degrees.
By reading syllabi and conducting classroom observations, they drew conclusions about two broad categories: academic rigor and teaching quality.
The universities both scored roughly in the middle of the scales for each category, and the two institutions’ scores were statistically indistinguishable. For both, academic rigor was 3.33 on a 5.5-point scale, and teaching quality was 2.97 on a 5-point scale.
The measure of academic rigor comprised three components. The first was the researchers’ assessment of the cognitive complexity of the course, based on the revised version of Benjamin Bloom’s taxonomy of educational objectives. The second was the quantity and complexity of the work assigned. The third was the level of expectations set for students’ preparation for and participation in class.
The metrics for teaching quality reflected how well the instructor introduced central concepts, called forth students’ prior knowledge of the material, and helped students reconcile the differences between what they thought they knew about the subject and what they had just learned.
“We are seeing faculty doing a good job of teaching in-depth subject matter,” Ms. Campbell said.
Class Size and Length
But most of the faculty members failed to use effective teaching practices. Just over half of the observed courses included in-class activities, and about 40 percent had discussions. Classes with those features earned significantly higher scores for academic rigor and teaching quality than those without them.
The length of the class also proved significant: those that met for longer than an hour produced greater benefits than those that met for an hour or less.
Class size played a role, too, and a clear dividing line emerged: classes of 25 students or fewer were associated with significantly higher academic rigor and teaching quality than larger ones, the researchers found. No statistically meaningful differences separated courses of 26 to 50 students, 51 to 100, or more than 100.
“If you’re going to have a small class size, it really matters that it’s pretty small,” Ms. Campbell said.
She cautioned that her pilot study may not be broadly applicable because it examined only two institutions. A follow-up study will look at 10 more.
Ms. Campbell added that she hoped studies like hers could broaden the conversation about the outcomes of postsecondary education; researchers and public officials have focused largely on how much money recent graduates earn.
“When you talk about outcomes, it can’t just be economic,” she said. “It has to be educational as well.”