As pressure mounts on colleges to document what their students learn, it remains tough to judge from outside the classroom how much knowledge they gain from their academic experience.
The traditional measure of learning is the course grade. Nothing says academic success more succinctly than an A.
But an A is subjective. Skeptics note that course requirements vary depending on the professor, the department, and the institution. Grades are often inflated.
Alternative methods to document learning have arisen in the form of standardized tests of critical thinking, which are meant to assess students’ ability to analyze material at a collegiate level. The strength of such tests is in their ability to provide results that can be compared across institutions.
But what if neither of those methods says much about the teaching, expectations, and assignments that students encounter in their courses?
Some researchers argue that colleges should measure the nature of teaching and learning itself, rather than rely solely on an outcome like a grade or a test score. On this view, students should be exposed to courses and assignments that require them to analyze information and apply it to new contexts, reflect on what they know, identify what they still need to learn, and sort through contradictory arguments.
Such opportunities are described in the research literature as “deep approaches to learning.” They figure prominently in Thursday’s release of data from the National Survey of Student Engagement. Nessie, as the survey is known, has long collected data on those practices, and this year’s report replicated and extended the previous year’s findings, which showed that participation in deep approaches tends to go hand in hand with other forms of engagement, like taking part in first-year learning communities and research projects.
Deep approaches have been a subject of education research since at least the mid-1970s, following Ference Marton and Roger Säljö’s pioneering work in Sweden. Those educational psychologists analyzed how students responded to an academic article. They found that one group used “surface” approaches, like rote memorization, while the other took “deep” ones, in which students sought to understand the material’s purpose, meaning, and significance.
Students responding to Nessie were asked to describe how much—very much, quite a bit, some, or very little—their coursework during the current academic year had emphasized analyzing the basic elements of an idea; synthesizing information into new, more-complex interpretations; judging the value of an argument; and applying concepts to new contexts. The goal is for colleges to use the results to drive improvements in how they deliver undergraduate education.
“The stuff we measure in Nessie is process,” said Alexander C. McCormick, the survey’s director and an associate professor in the School of Education at Indiana University at Bloomington. “College students should be having experiences that call upon them to use more higher-order thinking, regardless of whether or not it connects to given tests. Students should have to do more analysis and synthesis than memorization.”
A big divide emerged among the seniors who participated in the survey: those in the top quartile reported engaging in deep approaches to learning more than twice as often as did those in the bottom quartile.
The different levels of participation showed up in other behaviors that facilitate learning. Seniors in the top quartile, for example, spent about five more hours per week preparing for class than did their peers in the bottom quartile.
Chicken-and-Egg Problem
Nessie’s measures of deep learning figure prominently in other newly released research, in which the value of those measures is both reinforced and challenged.
Corbin M. Campbell, an assistant professor of higher education at Columbia University’s Teachers College, and Alberto F. Cabrera, a professor of higher education at the University of Maryland at College Park, looked at responses to Nessie’s deep-learning questions from about 1,000 students at a large, public research institution.
Ms. Campbell and Mr. Cabrera found that the three deep approaches that Nessie measures—higher-order, integrative, and reflective learning—were tightly interconnected and together constituted deep learning, as the two researchers describe in a paper to be presented on Thursday at the annual meeting of the Association for the Study of Higher Education.
The results, they write, suggest that faculty members and institutions can use Nessie’s deep approaches to assess students’ progress over time.
But the two professors also found that those measures of deep learning bore no relationship to students’ grade-point averages, a result Ms. Campbell found surprising.
That finding gave rise to what the authors describe as a chicken-and-egg problem: Was the lack of a relationship between deep learning and grades evidence of a problem with the Nessie measures or with grading?
“The results are intriguing,” Ms. Campbell said in an interview. “There’s something amiss here.”
Perhaps, she said, students are engaged in deep learning but are not being rewarded with good grades. Or perhaps students are receiving high marks but not learning deeply. “My sense,” she said, “is that it’s the former, not the latter.”
The Nessie measures of deep learning also appear to be disconnected from standardized tests of critical thinking.
Thomas F. Nelson Laird, an associate professor in the department of educational leadership and policy studies at Indiana University and principal investigator of the Faculty Survey of Student Engagement, wrote in 2008 that he had found relationships between Nessie’s deep approaches and tests of moral reasoning and of intellectual dispositions like curiosity and open-mindedness. But he could not find a link between the deep approaches and scores on the California Critical Thinking Skills Test, which colleges and employers use to gauge test takers’ ability to analyze, make inferences, and carry out inductive and deductive reasoning.
Similarly, Robert D. Reason, now an associate professor in Iowa State University’s department of education, wrote in a 2010 paper that he could find no relationship between Nessie’s deep approaches and scores on the Collegiate Assessment of Academic Proficiency, or CAAP. It serves purposes similar to those of the California test and is often used to evaluate colleges’ academic programs.
Mr. Nelson Laird said the disparity between the deep approaches and the results on standardized tests probably reflects a gap between what most people in higher education mean by the term “critical thinking” and how it is actually tested. Most academics, he said, see critical thinking as the ability to analyze ambiguous or contradictory arguments and reach a conclusion in situations where the right answer is unclear, if one even exists.
The tests, he said, do something different. They test inductive and deductive reasoning, and they contain little ambiguity. “Critical-thinking tests have clear answers,” he said.
Higher-education experts at ACT, the testing service that created the CAAP, acknowledged that there is a limit to how well a multiple-choice test can measure students’ skill at sorting through problems with uncertain answers. But they also said that facets of their test that assess students’ ability to form, judge, and extend arguments can offer insights into how well students solve ambiguous problems.
In the end, Mr. Nelson Laird argued, no tool on its own can capture learning across disciplines or institutions. Researchers should draw on several measures to paint a fuller picture of what happens during a student’s college education.
The strength of Nessie’s deep approaches to learning, he said, is that they call on the sorts of skills that both faculty members and employers seem to want students to develop. While Nessie data suggest that colleges could do a better job of giving assignments that truly tap critical-thinking skills, Mr. Nelson Laird said students, too, bear some responsibility for taking advantage of such opportunities.
“If they’re not in college to get this stuff out of college,” he said, “maybe it won’t matter what we do.”