Suppose that you’ve served on a faculty committee that has devised a list of collegewide learning objectives for your undergraduates.
You don’t want that list to just sit on a Web site as a testament to your college’s good intentions. (Right?) You want to take reasonable steps to measure whether your students are actually meeting the goals you’ve defined.
How best to do that is, of course, a highly contested question.
Some scholars urge colleges to use nationally normed tests, like the Collegiate Learning Assessment, that attempt to capture students’ critical-thinking and analytic-writing skills. Others say it is better to use student portfolios that allow students to demonstrate their skills in the context of their course work. (For a taste of that debate, see this post and the comments it engendered.)
Charles Blaich, director of Wabash College’s Center of Inquiry in the Liberal Arts, advocates an all-of-the-above approach. Colleges should use as many reasonable kinds of data as they can get their hands on, he says. The CLA and other national tests can be powerful tools, but they can’t possibly capture a college’s full range of learning objectives.
Mr. Blaich’s center is leading a new study in which 30 colleges and universities will try to synthesize multiple kinds of student-learning data.
“To the greatest extent possible, we want to help institutions use data that they already have,” Mr. Blaich says. “We don’t want them to have to create elaborate new structures for collecting data.”
Participating colleges will be welcome to use scores from CLA-style tests, and they will also be encouraged to dig deeply into their institutional data from the National Survey of Student Engagement and its ilk. But most of all, they will be expected to use materials from student course work.
“Each institution will have to figure out how it wants to do that,” Mr. Blaich says. “But we want them to use stuff that students actually produce, and to use that information for assessment and improvement. We see this as a more sustainable model for colleges, something that turns down the temperature on data collection.”
Over a period of three to four years, each college will focus on one or two specific learning outcomes and experiment with using student-outcome data to improve classroom instruction.
“The issue we see with institutions,” Mr. Blaich says, “is actually finding processes to use the data that they have. That’s the biggest challenge. In a way, we’ve already got a lot of the assessment things down. We’ve got rubrics. We’ve got e-portfolios. We have all sorts of stuff out there. But we need to improve its yield. And that’s more of a political process, more of a cultural process.”
The 30 participating institutions have a variety of plans for their Wabash-study grants. Middlebury College and Kalamazoo College plan to use the project to assess and improve their senior-year capstone experiences.
At St. Lawrence University, faculty members plan to use the Wabash project to find ways to improve students’ quantitative literacy, writing and research skills, and appreciation of diverse cultures. They will study not only what goes on in the classroom, but also the effects of students’ visits to the university’s Quantitative Resource Center and other student-support services.
“I think one of the aspects of Wabash that fits with the philosophy of St. Lawrence is that it emphasizes the importance of pedagogy,” says R. Danielle Egan, an associate professor of gender studies who is helping coordinate the project, in an e-mail message to The Chronicle. “That is not a hard sell here. It’s central to our identity as a university.”
Ms. Egan says she understands that some faculty members “cringe (or worse)” when assessment projects arise, but she hopes that a project like this one, which is grounded in students’ course work and not tied to national tests, will win broad acceptance.
She adds that she expects the project to give students “a more transparent understanding of not only what they have been doing but why they have been doing it over their four years at SLU. I think that making learning goals transparent should help students become more intentional—or at the very least provide clarity.”
Westminster College, in Utah, will use the project to focus on some of its collegewide learning goals: “global consciousness, social responsibility, and ethical awareness.”
“That’s been the goal that we’ve wrestled with the most, as far as exactly what it entails,” says Paul K. Presson, Westminster’s associate provost for institutional research. “We’ve been trying to come up with meaningful and measurable ways of addressing consciousness and ethical awareness.”
That last comment brings us to one element of the widespread skepticism that learning-assessment projects face.
Are colleges trying to assess aspects of personal development—including student behavior outside the classroom—that really can’t or shouldn’t be measured? Is there something slightly creepy and hyperintrusive about some of this work?
“There are no chips in the neck here,” Mr. Blaich says. “I think what colleges are trying to do is to see to what extent the activities that students engage in—in terms of organizations, study abroad, and so on—to what extent are those improving things in terms of things like diversity outcomes? A liberal-arts education is meant to be a sort of seamless in-and-out-of-the-classroom environment, where all sorts of things that are going on may influence student learning. Colleges would like to get a sense of whether the activities they’re sponsoring outside of class are benefiting students as much as they hope.”