The growing popularity and influence of the National Survey of Student Engagement trouble two researchers, who consider it a poor tool for evaluating institutional quality.
In a paper presented this week at the annual conference of the Association for the Study of Higher Education, Alberto F. Cabrera, a professor in the College of Education at the University of Maryland at College Park, and Corbin M. Campbell, a doctoral student there, challenge the survey’s “benchmarks,” which are intended to measure colleges’ performance in five categories: level of academic challenge, active and collaborative learning, student-faculty interaction, enriching educational experiences, and supportive campus environment.
The individual benchmarks, the researchers argue, have a high percentage of error and overlap with one another. “If each of the five benchmarks does not measure a distinct dimension of engagement and includes substantial error among its items, it is difficult to inform intervention strategies to improve undergraduates’ educational experiences,” Mr. Cabrera and Ms. Campbell say.
The paper, “How Sound Is NSSE?,” also examines the connection between students’ performance on the benchmarks and their grade-point averages, testing whether the two measures of success correlate for seniors at “a large, public, research-extensive institution in the Mid-Atlantic.” Mr. Cabrera and Ms. Campbell found that only one benchmark, enriching educational experiences, had a significant effect on the seniors’ cumulative GPA.
The director of the national survey, which is known as Nessie, welcomed the scrutiny. “Lots of schools should be doing this kind of thing. They should be understanding how Nessie works on their campus,” said the director, Alexander C. McCormick.
He argued, however, that because most of the survey’s questions concern students’ current academic year, any GPA comparisons should be limited to that same period. He also said he does not consider the benchmarks to be inherent underlying characteristics, or “latent constructs,” as the researchers’ statistical analysis assumes them to be. That assumption, he said, could make the percentage of error appear higher than it would otherwise be.
Still, the survey’s administrators have considered revising its measures. “We created these benchmarks to give people a way into the data,” Mr. McCormick said, “but I think they have maybe drawn too much exclusive attention.”
In one potential change, both institutional and national reports of Nessie’s results may no longer treat “enriching educational experiences” (which Mr. Cabrera and Ms. Campbell found to have the highest percentage of error) as a benchmark, but may instead break out its individual measures.
The new paper is not the first scholarly criticism leveled at Nessie. A paper presented at last year’s conference argued that many of the survey’s questions are too vague for students’ answers to be meaningful, and that others fail to account for the fallibility of human memory and the difficulty of accurately measuring attitudes. Other critics have asserted that the survey’s mountains of data remain largely ignored.