Colleges and universities have plenty of tools, but they must learn to use them more effectively. That is how George D. Kuh describes the state of assessing what college students learn.
Mr. Kuh directs the National Institute for Learning Outcomes Assessment, a collaboration among Indiana University at Bloomington, the University of Illinois at Urbana-Champaign, and other groups like the National Center for Higher Education Management Systems. This week, the institute released “More Than You Think, Less Than We Need: Learning Outcomes Assessment in American Higher Education,” a report that describes how more than 1,500 colleges measure what students learn.
About three-quarters had developed common learning outcomes for all undergraduate students, and the most common use of assessment data was to prepare for accreditation, according to the report. Although the vast majority of institutions have systematic ways of assessing student learning, most have few, if any, staff members assigned to the task, the report said. And two-thirds of provosts said more faculty engagement would help them measure learning outcomes better, while almost as many cited a need for greater assessment expertise on their campuses.
On Tuesday, The Chronicle asked Mr. Kuh, who is also a professor of higher education at Indiana, to share his thoughts on assessment.
Q. Please give us your 20,000-foot view of assessment at colleges and universities: What’s most encouraging? And what’s of greatest concern?
A. What’s most encouraging is that there’s plenty more going on here, certainly, than higher-education critics think. What’s still disconcerting is that I don’t see a lot of evidence of closing the loop. There’s a lot of data around, and there’s some evidence it’s being used in a variety of ways, but we still don’t know whether that information is being fed back in a way that changes practices for the better. That’s still the place where we’re falling short. Not many places are investing much in assessment, in terms of analyzing data through professional eyes and having people who can give a synthesized picture of what’s going on at an institution.
Q. The institute’s just about a year old now. How would you describe its purpose?
A. Our purpose for putting together something formal was to have a vehicle to bring together a cacophony of voices working on assessment at national and regional levels. Nobody really had the resources, or the time and energy, to look across the national landscape to provide an in-depth and almost real-time profile of what this work is about, where it’s going, and to what degree it’s making a difference. Moreover, we want to learn more about what tools are appropriate for what purpose, and we want to press institutions to do more productive work with data they have, and to be appropriately transparent about what they’re learning.
Q. And what do you plan to do in the coming months?
A. One of the things we’re doing is looking at institutional Web sites to see how transparent they are with student learning outcomes. The short answer is that the information is not always easy to find. So we’re going to spend some time looking at the impact of the Voluntary System of Accountability. It’s one thing for schools to sign up; it’s another to post the information and to show that they’re actually doing something with it. It’s not about posting a score on a Web site; it’s about doing something with the data.
Q. You’ve talked about how even solid assessment data may or may not make any sense to an applicant. How can colleges make information about learning outcomes more accessible and more relevant to prospective students and parents?
A. One thing is to put up samples of student work, the project that came out of an architecture design class, or a piece of writing, or a sculpture, some authentic work that would represent most of whatever majors there are on a campus. Now, this does not give you comparative information, but most of the numbers are going to be meaningless to parents and applicants, except to say that more students score above 70 at institution A than at institution B. I also think that if we can work more closely with accreditors to know what institutions have by way of measuring student learning, we could come up with a better template for the public. The challenge is that most parents of prospective students are not asking institutions for this information. There are people who are demanding that information on behalf of the public, but John Q. Public, he just doesn’t know what to look for.
Q. The report concludes that securing faculty cooperation in documenting learning outcomes is at the top of provosts’ wish lists. What can institutions do to promote that kind of cooperation?
A. It’s not an overnight process. But we need to be thinking about what the next generation of faculty are doing, those who are now in graduate programs. We need to make assessment a part of their work. Portfolios and rubrics are starting to take hold. The important thing is to make them more effective. Many faculty members are doing assessment now, but not necessarily in ways that we can scale up to the institutional level. Some say, “I am doing assessment. I’m assigning grades and I’m giving a lot of feedback. What do you want from me?” What we want is for assessment to become a public, shared responsibility, so there should be departmental leadership. It should be part of faculty members’ annual reviews. How common is it that when a faculty member is reviewed annually, some attention is paid to student assessment? Institutions have to value this work, and in many cases, faculty members need more support for developing good assessment tools themselves. We can make a shift here, but we have to be very, very intentional about it, and have places on campuses where faculty can get help doing it.
Q. Some administrators suggest that the recession might be the crisis that forces institutions to get more serious about assessment, as leaders continue to make tough choices about where to put their money. But considering that nearly half of the provosts surveyed said that they already need more resources to conduct better assessments, what do you make of that prediction?
A. If there’s one thing institutions do even less often than use information about student learning to change what they do, it’s eliminating ineffective programs and services. We have an add-on mentality. So while I would like to believe that we will innovate and increase learning productivity, I don’t think this crisis will do it. The next couple of years are going to be really, really bad, and I’m not sure that’s when you get that creativity and innovation. We just don’t have enough good examples of how institutions use this data to innovate.