Stop Looking at Rankings. Use Academe’s Own Measures Instead.
By Richard M. Freeland
September 8, 2017
As the higher-education community begins the new academic year, we also prepare for the latest round of college rankings from U.S. News & World Report. We can expect coverage on which colleges have risen and which have fallen, followed by the usual laments from institutions’ presidents about how meaningless these rankings really are. My own perspective as a former university president differs, as I believe the rankings play a useful, though imperfect, role in providing information.
But the larger concern is why, after all these years of criticism from authoritative voices and significant progress on the part of academe in responding to external demands for more transparency, the U.S. News rankings continue to play such a powerful role in shaping public perceptions of institutional quality.
The best context for thinking about the rankings is the broad movement for more accountability in higher education, which dates from at least the 1980s and reached a kind of crescendo with the Spellings Commission report of 2006. The commercial success of U.S. News’s rankings was an early signal of a craving for information that would help prospective students and families make informed choices. Indeed, one of the reasons I have defended the rankings is that a significant percentage of the metrics — most notably graduation rates, faculty qualifications, and investment in academic programs — are legitimate indicators of academic quality. But I also welcomed them because I believed they would foster competition, which would lead to a better system for informing the public about institutional quality.
That prediction has more than been fulfilled. In recent years, many more commercial publications have entered the arena with their own ranking systems. The federal government has also gotten into the act with the College Scorecard, which complements its well-established Integrated Postsecondary Education Data System, or IPEDS. More important and potentially promising have been the many efforts from within higher education to create accountability systems that allow for comparisons among institutions along academically significant dimensions, without resorting to the kind of simplistic format needed to produce an overall ranking. Critics of the rankings are right to argue that assigning each college or university a single number on a linear scale is folly.
Among the most impressive of the newer accountability efforts is the Voluntary System of Accountability, sponsored in the wake of the Spellings Commission by the Association of Public and Land-Grant Universities and the American Association of State Colleges and Universities. On the private side, the National Association of Independent Colleges and Universities developed the University and College Accountability Network, which includes profile data on the organization’s members, and the Council of Independent Colleges was an early and strong supporter of accountability efforts. Especially notable is a new generation of reports by state higher-education authorities providing data on each state’s public institutions and, in the case of Illinois, private institutions as well.
These reports typically include graduation rates, net costs, scholarship possibilities, programs offered, and student diversity. Just 11 years after the Spellings report, higher education has come a long way in providing information about institutional performance to the public, though the effort remains very much a work in progress.
Why, then, given all this progress, do the rankings, and especially the U.S. News rankings, continue to have so much influence over public opinion about colleges?
A part of the answer is the sheer marketing power of the magazine. In addition, the rankings tell readers something about institutional status, which remains an important, even decisive, consideration for many applicants who may confuse status with quality. A third factor is simplicity. The reports being produced from within higher education tend to be complex, overwhelming readers with information and requiring considerable work to compare institutional results. A final reason is that there continues to be deep resistance within academe to publishing data about what students actually learn.
This last point is especially troubling. According to an authoritative summary of the accountability movement, by the mid-1990s there was general agreement within higher education that providing information on learning outcomes needed to be a central component of any accountability system. Moreover, early iterations of the VSA required participating colleges to administer one of three widely available assessments of intellectual achievement: the Collegiate Learning Assessment, the Collegiate Assessment of Academic Proficiency, or the Measure of Academic Proficiency and Progress. In the end, however, most participants backed away from this requirement or decided not to publish results. There was too much resistance (especially from faculty members) to standardized testing, combined with legitimate questions about how much results on a single test could convey about student learning at particular institutions.
While many colleges have developed programs to assess student learning (often because of accreditation requirements), few systematically collect, and even fewer publish, quantitative data that allow readers to compare student intellectual achievement across institutional lines. Until this gap is filled, higher education’s systems of accountability will continue to be data-rich but information-poor with respect to the quality of actual learning, and the public will be left to rely on commercial rankings as indicators of institutional quality.
The most hopeful sign that academe will find a way to measure and report student learning is a project sponsored by the Association of American Colleges and Universities and the State Higher Education Executive Officers Association that evaluates student learning by having trained readers score samples of student work drawn from selected courses at participating institutions. This approach is attractive to faculty members because it eschews standardized tests and provides information that is helpful in guiding curricular improvements. Twelve states and 75 two-year and four-year colleges are testing the approach to determine its validity in evaluating critical thinking, written communication, and quantitative reasoning. Results are encouraging.
If this assessment method continues to hold up, it will give both the public and policy makers easily understood measures of student achievement, first at the state level and ultimately at the institutional level, against national and statewide benchmarks. It will also provide college leaders and faculty members with information about their own institutions. All who care about the reputation and quality of higher education, and who would like to replace commercial rankings with a more valid system of measuring institutional quality, will benefit from this effort.
Richard M. Freeland is president emeritus of Northeastern University and a higher-education consultant.