Point of View

Who’s Assessing the Assessors’ Assessors?

By  Steven Hales
March 11, 2013
[Illustration: Michael Morgenstern for The Chronicle]

Outcomes assessment is an epistemological quagmire, a problem unnoticed by many of the practice’s strongest advocates. Here’s why. Faculty members assign grades to students at the end of every course. Either (1) we know that on the whole those grades accurately measure the degree to which a student has mastered the course material and achieved the objectives of the course, or (2) we do not know. The very idea of outcomes assessment is predicated on Option 2. Unfortunately, the skepticism that drives outcomes assessment ultimately drives it to epistemic suicide.


If we know that grades measure outcomes adequately, then that’s all we need. Want to see how effective Professor Marcus’s teaching is in “Symbolic Logic”? Look at the course objectives, and check the grade distribution.

Ms. Marcus herself can look at the students’ performance to determine what’s working and what’s not working. She can use the evidence of their grades to decide whether a new textbook might be needed, more time needs to be spent on metalogical proofs, or if, in fact, she needs to change little in her current approach, since the students are mastering the material.

But how can we be sure that Ms. Marcus’s students are learning what she says they are learning? If we don’t have some evidence, some demonstration of that claim, then how can we hold her accountable for her classroom practices or the grades she gives?

We can’t use grades themselves to prove the veracity and legitimacy of grades, since that’s plainly circular, so really Option 2 is the way to go. Accordingly, we need some other tool of assessment to determine student success or failure. Let’s call this outcomes assessment. For the purposes of the present argument, it doesn’t matter what this tool is: multiple-choice questions, portfolios, oral examinations, and so on.


Now, the outcomes-assessment tool faces the same dilemma that grades did: Either (1) we know that it accurately measures the degree to which a student has mastered the course material and achieved the objectives of the course, or (2) we do not know.

If we do know that outcomes assessment reliably measures student success, then obviously we should use it instead of grades, whose trustworthiness we just discounted. Or, better, we should replace our faulty means of assigning grades, whatever they may be, with the outcomes-assessment tool and use that to determine grades.

Now that we’re assigning grades using the outcomes-assessment tool, we do know that on the whole, these grades accurately measure the degree to which a student has mastered the course material. Outcomes assessment, as a further step, is no longer needed. As Wittgenstein wrote, we can throw away the ladder now we have climbed up on it.

On the other hand, how can we be sure whether outcomes assessment really works as advertised or has all the accuracy of a Soviet agricultural report? We still need to hold the faculty members (or outcomes-assessment officers) who devised the outcomes-assessment tool accountable for their claims and assessment practices.

We must ensure that the tool is not overly crude, imprecise, idiosyncratic, or written solely to game the bureaucracy. Obviously we can’t use the outcomes-assessment tool itself to prove its own veracity, since that, again, is circular.


We’re compelled once more to be skeptics: We don’t know that the outcomes-assessment tool reliably indicates student achievement. We can’t merely assume without reason that it measures learning outcomes, and, by the same reasoning that justified outcomes assessment to start with, we need some other means of assessment to determine student success or failure. Once we use that new tool, then we can see how accurate outcomes assessment was. Let’s call this new procedure outcomes-assessment assessment.

It should be clear that we’ll need to prove the reliability of the outcomes-assessment-assessment procedure, too, and will therefore need an outcomes-assessment-assessment assessment, ad infinitum. In short, the demand that we prove the reliability of every method of gaining beliefs leads directly to a vicious regress. Ultimately we are left with skepticism: We have no knowledge at all.

The argument just given is hardly news among epistemologists; the problem of how we can trust our means of gaining beliefs goes back to Plato’s Allegory of the Cave. Just think about perception for a moment. If you doubt its general reliability, how can you test it? Not by perception, obviously. Yet any method you might propose for getting at the truth other than by perception faces the same doubts and demands for justification. The global skeptic who insists that all procedures for determining the truth be certified in advance, or confirmed by some other, independent method, tends to wind up hoist by his own petard.

Instead of improving our knowledge, the outcomes-assessment mania robs us of it by setting demands that result in skepticism. Does this mean that we have no choice but to blindly trust in the veracity of grading as a means of understanding how much students have learned? No. No more than we must blindly trust perception to deliver the truth.

Psychologists have uncovered all sorts of cognitive biases when it comes to perception—pareidolia, perceptual construction, expectancy, and so on. Both philosophers and psychologists have discovered a host of faulty ways in which we make inferences from the data of our senses, from logical fallacies to flawed heuristics.


Nevertheless, perception is the basis for science and our ordinary knowledge of the world. What we do is try to investigate our biases and errors in reasoning and correct for them as we form our judgments about what reality is really like. Certainly grading procedures can be subject to similar investigation and improvement.

Yet the mavens of outcomes assessment do exactly the wrong thing—they pretend to have some other method that is the royal road to truth when, prey to the same doubts, it is no more than the path to ignorance.
