Outcomes assessment is an epistemological quagmire, a problem unnoticed by many of the practice’s strongest advocates. Here’s why. Faculty members assign grades to students at the end of every course. Either (1) we know that on the whole those grades accurately measure the degree to which a student has mastered the course material and achieved the objectives of the course, or (2) we do not know. The very idea of outcomes assessment is predicated on Option 2. Unfortunately, the skepticism that drives outcomes assessment ultimately drives it to epistemic suicide.
If we know that grades measure outcomes adequately, then that’s all we need. Want to see how effective Professor Marcus’s teaching is in “Symbolic Logic”? Look at the course objectives, and check the grade distribution.
Ms. Marcus herself can look at the students’ performance to determine what’s working and what’s not. She can use the evidence of their grades to decide whether a new textbook is needed, whether more time should be spent on metalogical proofs, or whether, in fact, she needs to change little in her current approach because the students are mastering the material.
But how can we be sure that Ms. Marcus’s students are learning what she says they are learning? If we don’t have some evidence, some demonstration of that claim, then how can we hold her accountable for her classroom practices or the grades she gives?
We can’t use grades themselves to prove the veracity and legitimacy of grades, since that’s plainly circular, so Option 2 it is: we do not know. Accordingly, we need some other tool of assessment to determine student success or failure. Let’s call this outcomes assessment. For the purposes of the present argument, it doesn’t matter what this tool is: multiple-choice questions, portfolios, oral examinations, and so on.
Now, the outcomes-assessment tool faces the same dilemma that grades did: Either (1) we know that it accurately measures the degree to which a student has mastered the course material and achieved the objectives of the course, or (2) we do not know.
If we do know that outcomes assessment reliably measures student success, then obviously we should use it instead of grades, whose trustworthiness we have just called into question. Or, better, we should replace our faulty means of assigning grades, whatever they may be, with the outcomes-assessment tool and use that to determine grades.
Now that we’re assigning grades using the outcomes-assessment tool, we do know that on the whole, these grades accurately measure the degree to which a student has mastered the course material. Outcomes assessment, as a further step, is no longer needed. As Wittgenstein wrote, we can throw away the ladder once we have climbed up it.
On the other hand, how can we know whether outcomes assessment really works as advertised or has all the accuracy of a Soviet agricultural report? We still need to hold the faculty members (or outcomes-assessment officers) who devised the outcomes-assessment tool accountable for their claims and assessment practices.
We must ensure that the tool is not overly crude, imprecise, idiosyncratic, or written solely to game the bureaucracy. Obviously we can’t use the outcomes-assessment tool itself to prove its own veracity, since that, again, is circular.
We’re compelled once more to be skeptics: We don’t know that the outcomes-assessment tool reliably indicates student achievement. We can’t merely assume without reason that it measures learning outcomes, and, by the same reasoning that justified outcomes assessment to start with, we need some other means of assessment to determine student success or failure. Once we use that new tool, we can see how accurate outcomes assessment was. Let’s call this new procedure outcomes-assessment assessment.
It should be clear that we’ll need to prove the reliability of the outcomes-assessment-assessment procedure, too, and will therefore need an outcomes-assessment-assessment assessment, ad infinitum. In short, the demand that we prove the reliability of every method of gaining beliefs leads directly to a vicious regress. Ultimately we are left with skepticism: We have no knowledge at all.
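The shape of the regress can be made explicit. In the schematic below (the notation is an ad hoc illustration, not anything standard), let $A_0$ stand for grading, let each $A_{n+1}$ be whatever tool is supposed to certify $A_n$, and let $K(A_n)$ abbreviate the claim that we know $A_n$ reliably measures learning:

\[
\begin{aligned}
&\text{(1) Skeptical demand: } K(A_n) \text{ only if some independent } A_{n+1} \text{ certifies } A_n \text{ and } K(A_{n+1}) \text{ holds.}\\
&\text{(2) No circularity: no } A_n \text{ may certify itself.}\\
&\text{(3) Hence } K(A_0) \text{ requires } K(A_1), \text{ which requires } K(A_2), \text{ and so on without end.}\\
&\text{(4) No endless chain of certifications can ever be completed, so } \neg K(A_n) \text{ for every } n.
\end{aligned}
\]

Step (1) is exactly the demand that licensed outcomes assessment in the first place; once granted, it dooms every layer of assessment equally.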
The argument just given is hardly news among epistemologists; the problem of how we can trust our means of gaining beliefs goes back to Plato’s Allegory of the Cave. Just think about perception for a moment. If you doubt its general reliability, how can you test it? Not by perception, obviously. Yet any method you might propose for getting at the truth other than by perception faces the same doubts and demands for justification. The global skeptic who insists that all procedures for determining the truth be certified in advance, or confirmed by some other, independent method, tends to wind up hoist by his own petard.
Instead of improving our knowledge, the outcomes-assessment mania robs us of it by setting demands that result in skepticism. Does this mean that we have no choice but to blindly trust in the veracity of grading as a means of understanding how much students have learned? No. No more than we must blindly trust perception to deliver the truth.
Psychologists have uncovered all sorts of cognitive biases when it comes to perception—pareidolia, perceptual construction, expectancy, and so on. Both philosophers and psychologists have discovered a host of faulty ways in which we make inferences from the data of our senses, from logical fallacies to flawed heuristics.
Nevertheless, perception is the basis for science and our ordinary knowledge of the world. What we do is try to investigate our biases and errors in reasoning and correct for them as we form our judgments about what reality is really like. Certainly grading procedures can be subject to similar investigation and improvement.
Yet the mavens of outcomes assessment do exactly the wrong thing—they pretend to have some other method that is the royal road to truth when, prey to the same doubts, it is no more than the path to ignorance.