Many higher-education studies may reach false or fairly meaningless conclusions by relying on students’ assessment of their own academic performance, two new research papers suggest.
The papers, being presented this week at the annual conference of the Association for the Study of Higher Education, found that many students paint a distorted picture when asked in surveys about their own academic success—a pattern that potentially skews researchers’ overall findings and comparisons between various student populations.
The papers’ assertions have serious implications for higher education. Many education researchers rely on college students’ accounts of their own performance because of the difficulty of collecting school transcripts and other academic records. (Among the widely cited higher-education surveys that collect students’ self-reported test scores are the College Student Inventory, administered by Noel-Levitz, and the Cooperative Institutional Research Program Freshman Survey, administered by the Higher Education Research Institute at the University of California at Los Angeles.)
Researchers also use students’ reports of their own educational progress and growth to try to measure the long-term effects of various educational programs and strategies. Those who use self-reported data typically acknowledge as much in presentations of their findings, but the degree to which the methodology distorts their conclusions may not be fully understood by the researchers or their audiences.
In one of the papers being presented this week, James S. Cole, an assistant scientist at the Center for Postsecondary Research at Indiana University at Bloomington, and Robert M. Gonyea, an associate director of the National Survey of Student Engagement, compared the scores that about 25,000 students had reported earning on the ACT and SAT with their actual scores on those tests. In many cases, the researchers found, students had incorrectly reported their scores—most often by inflating them.
The students’ estimates of their scores were taken from a 2007 survey of first-year college students, administered by Mr. Gonyea’s organization, which asked respondents to “please write your scores below (as best you remember).” The students’ actual scores were subsequently reported to his organization in a 2008 survey of the colleges where they were enrolled.
In trying to determine why students might incorrectly report their scores, the researchers focused on two likely explanations: memory failure, which would be evident if the degree to which students erred reflected the complexity of the information they were being asked to recall, and “motivated distortion,” in which students provide incorrect answers intended to give a good impression or preserve their own self-esteem.
The researchers found that both forces appeared to be at work.
Although students were fairly accurate in recalling their single composite score on the ACT, they were significantly less likely to accurately answer the more complex question of how they had done on each of the three sections of the SAT: critical reading, mathematical reasoning, and writing.
And, in a development suggesting that “motivated distortion” was also at work, students who erred in recalling their scores on various parts of the SAT were much more likely to overstate than to understate how well they had done. Low-achieving students exaggerated their SAT performance more often, and by greater amounts, than high achievers.
Positive Self-Assessments
In the other paper being presented this week, Nicholas A. Bowman, a postdoctoral research associate at the Center for Social Concerns at the University of Notre Dame, compared students’ estimates of their educational progress with their actual progress on objective measures, like tests of critical thinking. He also looked at subjective measures, like surveys asking how much students agreed with statements such as “I know myself pretty well,” to chart changes in their perceived self-awareness over time.
Mr. Bowman based his analysis on data on 3,000 students at 19 colleges collected as part of the Wabash National Study of Liberal Arts Education, a long-term study of what students who were freshmen in the fall of 2006 have since learned.
Mr. Bowman found that students’ reports of their own progress did not correspond with their progress as charted by either the objective or subjective measures administered to them over time. Students might claim to have become better critical thinkers when their performance on tests of that skill remained flat. Or they might say they had become much more self-aware over time when comparisons of their most recent and past survey responses showed no change in how much self-awareness they attributed to themselves.
Moreover, some groups of students—like those who were Hispanic, from low-income backgrounds, or not of traditional college age—were more likely than others to overstate how much progress they had made, raising the possibility that colleges using self-reported student data may be failing to detect how much some populations are lagging behind others.
Mr. Bowman makes clear that self-reported student data can answer some important questions, like whether students feel satisfied with their college experience. But, he warns, “self-reported gains can lead administrators and practitioners not only to mistakenly endorse practices and programs that have no true longitudinal impact, but also to mistakenly reduce or eliminate funding for programs that actually yield longitudinal improvements.”
Charles F. Blaich, who is heading the Wabash study for Wabash College’s Center of Inquiry in the Liberal Arts, said on Tuesday that he is familiar with Mr. Bowman’s work and believes it points to a serious research challenge. Because the Wabash study uses both self-reported data and other measurements, he said, “I think our research is illuminating the problem.”