And now for this week’s Rorschach test.
The National Institute for Learning Outcomes Assessment has just released a white paper about the regional accreditors’ role in prodding colleges to assess their students’ learning.
The paper, “Regional Accreditation and Student Learning Outcomes: Mapping the Territory,” begins with quotations from four pseudonymous college presidents who took part in a focus group last year.
All four presidents suggested that their campuses’ learning-assessment projects are fueled by Fear of Accreditors. One said that a regional accreditor “came down on us hard over assessment.” Another said, “Accreditation visit coming up. This drives what we need to do for assessment.”
Here’s where the Rorschach test kicks in.
Accountability hawks will read those presidents’ quotes and think, “See? I knew it. Colleges only get serious about student learning when they’re pressed from outside. Without that external pressure, colleges will coast along complacently. They’ll cash their tuition checks and let their faculty members pretend that they’re teaching as well as they can and let students pretend that they’re learning as well as they can.”
But skeptics will read the same quotes and say, “See? I knew it. These assessment projects are being foisted on us by bureaucrats hundreds of miles from here. Our provost is making us do this only so she can check off some box on her next accreditation report. These people know nothing, nothing, nothing about my department or my discipline.”
Whichever side of that line you fall on, you’ll probably be interested in the institute’s new paper, which was written by Staci Provezis, a project manager and research analyst at the institute’s headquarters at the University of Illinois at Urbana-Champaign.
Ms. Provezis found, not surprisingly, that all seven regional accreditors are more likely now than they were a decade ago to insist that colleges hand them evidence about student-learning outcomes. Merely telling an accreditor that you have an assessment plan is no longer enough. In the region covered by the Western Association of Schools and Colleges, Ms. Provezis reports, “almost every action letter to institutions over the last five years has required additional attention to assessment, with reasons ranging from insufficient faculty involvement to too little evidence of a plan to sustain assessment.”
True to its reputation, the Southern Association of Colleges and Schools appears to have one of the most demanding regimens. The association’s Commission on Colleges requires each college in its region to have a “quality enhancement plan” for improving some element of student learning. (Here is an example from the University of Texas at Dallas.)
The white paper gently criticizes the accreditors for failing to make sure that faculty members are involved in learning assessment. Every accreditor pays lip service to the idea of faculty involvement, Ms. Provezis writes, but their standards are “weak” when it comes to assuring that such involvement actually happens. Ms. Provezis paraphrases one accreditation leader who told her that “it would be good to know more about what would make assessment worthwhile to the faculty—for a better understanding of the source of their resistance.”
But faculty resistance to learning assessment—at least, to a certain kind of learning assessment—isn’t so hard to understand. Many of the most visible and ambitious learning-assessment projects out there strangely ignore the scholarly disciplines’ own internal efforts to improve teaching and learning.
More on that topic tomorrow.