Whenever someone determines that there is a health benefit to a food, say carrots, there is immediately a rush to figure out what specific component of the food is good for you. The idea is to find the magic ingredient and put it in a pill so that people can avoid the whole tiresome process of chewing and swallowing carrots. Inevitably, after a few years of people consuming near-toxic doses of beta carotene, we learn that there is no way to successfully isolate it and that the health benefits come from actually eating carrots.
We seem to be on a similar path with assessment. The fundamental claim of the assessment industry is that by measuring one thing — student learning — it can show us how to improve our courses, curricula, and colleges. But what if learning isn’t the most important element of a college education? Then the underlying assumption behind assessment would be wrong.
The assessment industry is not known for self-critical reflection. Assessors insist that faculty provide evidence that their teaching is effective, but they are dismissive of evidence that their own work is ineffective. They demand data, but they are indifferent to the quality of those data. So it’s not a surprise that the assessment project is built on an unexamined assumption: that learning, especially higher-order learning such as critical thinking, is central to the college experience.
It’s not just assessors who make this assumption. Almost any field that does not teach vocational skills makes some claim about teaching students critical thinking. Educational reformers also embrace learning as the defining feature of college. For example, the Lumina Foundation’s Degree Qualifications Profile “provides a qualitative set of important learning outcomes, not quantitative measures such as numbers of credits and grade-point averages, as the basis for awarding degrees.”
If student learning is the real value of college, we are doing a terrible job. There is little evidence that college changes students’ capacity for critical thought.
The latest nail in the coffin comes from within the assessment world itself. In an article in Change, Daniel Sullivan, president emeritus of St. Lawrence University and a senior fellow at the Association of American Colleges & Universities, and Kate McConnell, assistant vice president for research and assessment at the association, describe a project that looked at nearly 3,000 pieces of student work from 14 institutions. They used the critical-thinking and written-communication Value rubrics designed by the AAC&U to score the work. They discovered that most college-student work falls in the middle of the rubric’s four-point scale measuring skill attainment.
Things get more interesting when the scores are broken out by the number of credits students have earned. The difference between seniors and freshmen is almost nonexistent (a 0.2-point improvement in critical thinking and a 0.18-point improvement in written communication). Perversely, seniors score slightly higher than first-year students but lower than sophomores and juniors. It appears, therefore, that how long students have been in college has little effect on how they perform on the Value rubrics.
So what is going on here? Maybe the rubrics are flawed, and they don’t actually measure student learning. This is a problem for assessment. If one of the most sophisticated assessment tools can’t measure student learning even when deployed by seasoned educational researchers, then why should we think that the thousands of departmentally devised assessment plans will work any better? But an equally plausible explanation would be that students don’t learn much about critical thinking or written communication in college.
There is substantial evidence to suggest that conclusion. Richard Arum and Josipa Roksa’s 2011 book, Academically Adrift, used data from the Collegiate Learning Assessment to show that a large percentage of students don’t improve their critical thinking or writing (the same two types of higher learning that Sullivan and McConnell looked at) in the first two years of college. A 2017 study by The Wall Street Journal used data from the CLA at dozens of public colleges and concluded that the evidence for learning between the freshman and senior years was so scant that the Journal called it “discouraging.”
What I suspect is happening here is that the CLA and Value rubrics are measuring some basic and mostly stable component of students’ intellect. We may be able to teach certain types of procedural skills, but higher-order skills like critical thinking may be mostly dependent on raw intellectual horsepower and thus are only minimally subject to change. When we do see change in higher-order skills, it’s probably more attributable to the maturation of students’ brains than to the effects of education.
Letting go of the idea that learning is the chief benefit of a college education would explain why decades of increased investment in assessing student learning have yielded so few measurable improvements in actual student learning. Is the whole assessment project, with its armies of consultants, software vendors, journals, foundations, institutes, and organizations, built on a false premise?
I am not suggesting that college is a waste of time or that there is no value in a college education. But before we spend scarce resources and time trying to assess and enhance student learning, shouldn’t we maybe check to be sure that learning is what actually happens in college?
We recognize that there are benefits to a college education. But rather than just accepting that those benefits are impossible to boil down to a single critical feature, we have hubristically settled on learning and decided it is the main actor in the complex drama of college. Instead of looking for the magic ingredient in the college experience and trying to repackage it in an easier-to-swallow form (competency-based education, for example), maybe we should just admit that a four-year, face-to-face college education is a good thing, but we’re not really sure why.
Erik Gilbert is a professor of history at Arkansas State University. He blogs at badassessment.org.