Colleges have increased the number of ways they assess their students’ learning, but their use of the data to inform changes in curriculum has not kept pace, according to a report released on Tuesday.
The trends of the past four years are going “in the right direction,” said George D. Kuh, a professor emeritus at Indiana University at Bloomington’s School of Education, who is lead author of the report, “Knowing What Students Know and Can Do,” published by the National Institute for Learning Outcomes Assessment.
But “we’re still lagging in the areas where it’s worthwhile, and that’s putting the data to use,” he said.
The report is based on a survey of administrators at 1,202 two- and four-year institutions that mirror the composition of American higher education. Most of the surveys, conducted in 2013, were completed by provosts and chief academic officers; the rest were completed by deans and directors of assessment.
Mr. Kuh and his co-authors also compared the results with the responses of the 725 institutions that had participated in a similar survey he and others conducted in 2009.
The researchers found that 84 percent of colleges have learning goals for students, up from 74 percent four years ago. The average number of approaches that institutions used to assess their students grew from three to five.
The most popular method of assessment, used by 85 percent of respondents, is student surveys, like the National Survey of Student Engagement, or Nessie, which Mr. Kuh pioneered, as well as the Community College Survey of Student Engagement, the University of California Undergraduate Experience Survey, and the Cooperative Institutional Research Program’s surveys of freshmen and seniors.
Next in popularity, at 69 percent, were rubrics, which define levels of student performance on assignments.
The survey revealed that portfolios of students’ work have become much more common as an assessment method. Only 8 percent of institutions used them four years ago, but 41 percent do so now.
The growing use of rubrics, portfolios, and classroom-based assessments shows the desire of faculty members to “capture student performance in the contexts where teaching and learning occur,” the authors wrote.
‘Most Disappointing’
The leading reason that colleges assess their students’ learning is to comply with the requirements of regional and programmatic accreditors, a trend suggesting that accreditors continue to exert influence even as they become targets of criticism.
What’s more, assessment results seldom leave the campus, the researchers found. Less than one-third of colleges post such results on their websites. “We’re still in a place where institutions, ironically, are wading into unfamiliar territory—and that is talking about our core business openly,” Mr. Kuh said.
Colleges do share and use the results of their assessments internally. They said they use the data “quite a bit” to modify their curricula and do so nearly as often to revise learning goals, review programs, and make institutional improvements.
But “quite a bit” was not often enough for the authors, who wrote that the use of evidence was not as pervasive as it needed to be. “This is by far the most disappointing finding,” they wrote.
Such disappointment reflects higher expectations for assessment work, Mr. Kuh said. In the past, the goal of assessing learning was to "close the loop," which meant collecting data and using the results to change teaching. Now, he said, the phrase covers that same process plus a further step: measuring outcomes to see whether the new approach to teaching has succeeded.
The broader changes, however, are promising, the authors wrote. Assessment is no longer seen only as a means of complying with accreditors, “but rather is motivated by a more appropriate balance of compliance and an institutional desire to improve.”
The other authors are Stanley O. Ikenberry, a former president of the American Council on Education; Natasha Jankowski, assistant director of the learning-outcomes institute; and Jillian Kinzie, associate director of Nessie.