It wasn’t long ago that standardized tests were ascendant in higher education.
Feeling pressure from federal policy makers and the public to demonstrate rigor in their courses, colleges turned to the tests as seemingly objective measures of quality and of what students were learning.
Consumer-oriented websites sprang up. College presidents pledged to be upfront about their students’ performance on such tests. A widely cited book relied on standardized tests to support its assertions that many students failed to learn much during college.
But then momentum slowed.
A leading advocacy group for the disclosure of student-learning outcomes quietly closed. Another project has seen flagging interest. Researchers have cast doubt on the reliability of some standardized measures of learning, which are increasingly being marketed directly to students. And professors have become more interested in tools that allow them to standardize their assessment of their students’ performance on homegrown assignments instead of using outside tests.
“The pressure that institutions felt is waning,” says Natasha A. Jankowski, assistant director of the National Institute for Learning Outcomes Assessment.
But standardized tests, say several advocates of their use, are not going away. They are being used, perhaps more appropriately, for internal purposes like assessing curricula. The institute reported this year that institutions were using learning assessments that were more varied than they were four years ago, but that fewer than one-third of colleges reported the results.
The federal government’s emphasis has shifted, too. The ratings system being proposed by the Obama administration calls for measures of completion and academic progress, but it does not mandate that colleges produce measures of student learning.
That position differs from the one promoted by the George W. Bush administration in “A Test of Leadership: Charting the Future of U.S. Higher Education,” a report by the commission that was convened by Margaret Spellings, who was then secretary of education.
“Student achievement,” the authors wrote, “must be measured by institutions on a ‘value-added’ basis that takes into account students’ academic baseline when assessing their results.”
Colleges and their advocates knew they had to respond, and new efforts emerged, like the Voluntary System of Accountability, a joint project of the Association of Public and Land-Grant Universities and the American Association of State Colleges and Universities. It created a website called College Portraits to make information about colleges’ costs and educational quality available to the public.
The idea struck a chord. In 2012, The Washington Post described the 144 colleges that posted their standardized-test scores as part of “a growing accountability movement in higher education.”
Interest at the grass-roots level was weak, though. The pages on College Portraits that were dedicated to student learning were among the website’s least popular, drawing only about 1 percent of its visitors, according to an analysis by the learning-outcomes institute. Visitors most often looked at the pages on colleges’ costs.
Prospective students and their parents may not have been interested in the subject, many observers said, because college choice is often determined by whichever institution is nearest and most affordable.
Efforts Flag
The accountability system remains active, and the website is being revised. But the number of colleges posting student-learning outcomes has dropped, from 144 two years ago to 110. Many colleges were purged for not paying dues or for inactivity, says Christine M. Keller, the system’s executive director.
The drop-off does not reflect a loss of momentum for the movement to make student-learning outcomes public on an institutional level, she says. “It’s just shifted and broadened a bit.”
Another effort, the New Leadership Alliance for Student Learning and Accountability, emerged in 2009. It issued guidelines, endorsed by 40 accreditors and associations, that staked out broad principles of assessment and accountability, which included collecting evidence of student learning through an externally validated tool, like a standardized test.
More than 100 college presidents also signed pledges, distributed by the alliance, to increase their institutions’ attention to learning. “We will be held accountable for what we say we will provide,” one president told The Chronicle in 2010.
The New York Times quoted the alliance’s executive director, David C. Paris, in a 2012 article that described the use of assessments of learning, including standardized tests, as “gaining traction.”
“There’s a real shift in attitudes under way,” he told the newspaper.
Mr. Paris left the organization later that year to join the Association of American Colleges and Universities. The alliance closed soon after.
Its demise was not the result of any lack of interest in measuring student learning, says Judith S. Eaton, who was chair of the alliance’s board. The problem was money. The alliance relied on support from private foundations and never developed a source of earned income.
Lack of Acceptance
The release in 2011 of Academically Adrift: Limited Learning on College Campuses also generated significant public conversation about academic quality. The blockbuster book looked at a range of data, including value-added measures of the Collegiate Learning Assessment, a standardized test of students’ critical-thinking skills.
The authors, Richard Arum and Josipa Roksa, concluded that 36 percent of students failed to show significant learning gains between their freshman and senior years, as reflected on the test.
Meanwhile, experts in many parts of academe were expressing growing doubts about the tests on which such conclusions rested, like the CLA, the Measure of Academic Proficiency and Progress (now called the ETS Proficiency Profile), and the Collegiate Assessment of Academic Proficiency.
After trying the tests, many colleges found they didn’t care much for them, says Ms. Jankowski, of the learning-outcomes institute. Many of the tests “lack credibility and acceptance within a broad sweep of the higher-education community,” she wrote in 2012.
That conclusion echoed concerns raised by Trudy W. Banta, an expert in assessment and professor of higher education at Indiana University-Purdue University at Indianapolis.
Test results can suggest areas of strength and weakness in individual students and inform ways to improve teaching and curriculum, she wrote in a paper presented this month at the annual meeting of the American Educational Research Association.
But, she added, “the standardized tests of generic skills being touted today are simply not capable of fulfilling the dreams of policy makers who want to assess and compare the capacities of institutions (and nations) to improve college student learning.”
Part of the problem may be motivation. Scores on standardized tests are not tied to what students do in their courses and have no effect on their academic progress.
Students, particularly seniors, performed better on the CLA when they were paid to take the test, according to a study presented at the research association’s meeting by Jeffrey Steedle, formerly a measurement scientist at the Council for Aid to Education, which created and administers the CLA and conducts surveys.
Standardized tests alone aren’t sufficient to capture learning, says Roger Benjamin, the council’s president. Assessments of learning that are formative or directly connected to coursework are also valuable, because they gauge students’ understanding as it happens. Still, he says, tests like the CLA have the virtue of being grounded in measurement science, a virtue formative assessments lack.
“I don’t think standardized testing is going away at all,” he says.
The CLA was replaced last year with the CLA+, which can identify strengths and weaknesses of individual students. While the council is still marketing the test to institutions—300 of which administered it, according to the most recent public records—the organization’s growth as a business may well be driven by demand from individual students, Mr. Benjamin says.
The decision to appeal directly to students is also one of principle, he says.
Employers who might traditionally have relied chiefly on a college’s reputation to judge the qualifications of recent graduates can now refer to CLA+ scores as a more objective indicator of a potential hire’s critical-thinking skills.
‘Back to a Peak’
Growing disenchantment with standardized measures of critical thinking has created enthusiasm among many observers for a new effort to measure learning consistently and reliably by using the materials that students are assigned and complete in their courses.
The program, undertaken by a consortium of nine states, is using the Valid Assessment of Learning in Undergraduate Education rubrics of the Association of American Colleges and Universities.
The rubrics are divided into parts, each of which can be judged on a scale of one to four. They measure things like critical-thinking skills in a way that can be generalized and compared across departments, institutions, and states. The effort is part of a $2.3-million project supported by the Bill & Melinda Gates Foundation.
It may prove timely. The fire that was lit under colleges by the Spellings commission won’t disappear, says Stephen M. Jordan, president of Metropolitan State University of Denver, who is chair of the board of the Voluntary System of Accountability.
“People thought, ‘This is all going away now, and we’ll go back to the good old days,’” he says, describing the end of the commission.
But interest in holding colleges accountable for learning, and making the results known, he says, has simply moved from the federal government to businesses, foundations, and state governments.
“We went through this peak to a valley,” Mr. Jordan says, “and we’re going back to a peak where we see lots of activity going on.”