Last month U.S. News & World Report released its 2023 college rankings. The most notable move belonged to Columbia University, which for decades had been steadily rising in the ranks toward the very top of the list. This year it tumbled from No. 2 to No. 18.
That’s better than might have been expected. After a blog post in February by Michael Thaddeus, a math professor at the university, showed that Columbia had provided fraudulent data to the magazine, U.S. News summarily unranked it. When the university was able to update only some of the data in time for the latest rankings, the editors “assigned competitive set values.”
In other words, the magazine made up data to keep a popular university in its rankings.
It’s a decision that exposes how much of the magazine’s purportedly objective assessment of institutions’ educational quality is built on a foundation of belief, feelings, and judgments made by editors. It also demonstrates that the driving force behind the rankings is to generate publicity and to bolster their prestige. A steep fall for Columbia would have forced readers to question the rankings’ legitimacy and U.S. News’s authority.
That authority has been building since the magazine released its first college ranking nearly 40 years ago. To fully understand the void that U.S. News was filling in 1983, and the decades of problems that have followed, it’s helpful to consider the genesis of college rankings.
Attempts to quantify the academic quality of American colleges began at the start of the 20th century. That period saw rapid growth in measurement sciences (testing, ranking, etc.) and a boom in the number of colleges. Unfortunately that was also when eugenics emerged, and the overlap between measurement scientists and eugenicists at the time was significant. The two types of rankings that grew from this early scholarship helped shape U.S. News’s efforts: outcomes-based rankings and reputational rankings.
Outcomes-based rankings, in particular, have a troubled history. They are largely founded on the work of James McKeen Cattell, a psychologist — and eugenicist — at Columbia University, and Kendrick Charles Babcock, a higher-education specialist at the Bureau of Education, a precursor of the U.S. Department of Education. In 1903 Cattell created an evaluation of colleges based on how many “eminent men” were producing work on their campuses, and he used those results to devise a ranking. He also believed that the West was in decline, and that we could “improve the stock by eliminating the unfit or by favoring the endowed.”
While Cattell was sounding the alarm about the decline of “great men of science,” the Association of American Universities asked Babcock to determine which colleges best prepared their students for graduate school. The AAU believed that by working with the impartial Bureau of Education, the rankings would gain greater acceptance. However, an early draft of Babcock’s report leaked, and the ensuing backlash from lower-ranked colleges caused the sitting president, William Howard Taft, to issue an executive order to quash the report.
The other rankings methodology that emerged — one based on reputation — required soliciting well-informed opinions from expert raters about institutions or programs in their field. Early reputational rankings mostly assessed graduate programs, and they were grounded in concrete outcomes (the production of papers, for example) rather than in raters’ impressions of an entire institution.
U.S. News editors, by contrast, chose to base the first version of their rankings entirely on a reputational survey of 1,300 college presidents, many of whom had no familiarity with the institutions they were rating. The pool of raters eventually expanded, but problems remained. A National Opinion Research Center report commissioned by the magazine in 1997 found that, for raters, putting institutions into quartiles was “an almost impossible cognitive task.” The center also pointed out that each rater had been asked to rate a huge number of institutions — about 2,000.
In 1987, Robert Morse took over leadership of the U.S. News rankings, and his opinions on what defines a good college have determined the rankings since then. His first major change in the rankings methodology came in 1991, when the formula was expanded to include four criteria beyond reputation — graduation rates, faculty resources, financial resources, and student selectivity.
Good researchers try to limit the influence of personal opinion and bias in criteria selection. The editors at U.S. News have always seemingly done the opposite, starting with their judgment and only begrudgingly allowing the opinions of experts to influence the methodology.
U.S. News & World Report’s methodology has been shown repeatedly to advantage the wealthiest institutions and to suffer from measurement error, the difference between what we want to know and what is actually being measured. The 1997 National Opinion Research Center report stated that the weights used “lack any defensible empirical or theoretical basis,” and that “we were disturbed by how little was known about the statistical properties of the measures.” In response to the center’s recommendation that the methodology remain constant for five to seven years, U.S. News editors wrote that “we prefer to maintain our options to make small changes in the rankings model whenever we feel it will improve the quality of the results.”
Even as the rankings evolved in response to criticism and feedback, Morse chose which additional criteria would be included and how heavily each factor would be weighted, saying in 2004 that “each factor is assigned a weight that reflects our judgment about how much a measure matters.” The magazine boldly claimed in 2008 that “it relies on quantitative measures that education experts have proposed as reliable indicators of academic quality, and it is based on our nonpartisan view of what matters in education.” That statement shows the hubris of the editors — that what they think matters in education should be the guiding force.
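For readers unfamiliar with how such a formula operates, here is a minimal sketch of a weighted-composite ranking. The factor names, weights, and scores below are invented placeholders chosen to resemble the criteria described above; this illustrates the mechanism, not U.S. News’s actual model.

```python
# A minimal sketch of a weighted-composite ranking of the kind described
# above. All factor names, weights, and scores are invented placeholders,
# not U.S. News's actual methodology.

# Editor-chosen weights: each factor's share of the final score.
WEIGHTS = {
    "peer_assessment": 0.25,
    "graduation_rate": 0.20,
    "faculty_resources": 0.20,
    "financial_resources": 0.10,
    "student_selectivity": 0.25,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 100%

# Hypothetical normalized scores (0-100) for two fictional colleges.
colleges = {
    "College A": {"peer_assessment": 90, "graduation_rate": 85,
                  "faculty_resources": 70, "financial_resources": 95,
                  "student_selectivity": 88},
    "College B": {"peer_assessment": 70, "graduation_rate": 92,
                  "faculty_resources": 80, "financial_resources": 60,
                  "student_selectivity": 75},
}

def composite(scores):
    """Weighted sum of factor scores; the weights decide what counts."""
    return sum(WEIGHTS[factor] * value for factor, value in scores.items())

# Rank by composite score, highest first.
ranked = sorted(colleges, key=lambda c: composite(colleges[c]), reverse=True)
for rank, name in enumerate(ranked, start=1):
    print(rank, name, round(composite(colleges[name]), 1))
```

The critics’ point is visible in the code itself: change the numbers in WEIGHTS and the order of the printed list can flip, with no change whatsoever in the underlying data about the colleges.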
Over and over, Morse has asserted his judgment over that of educators, researchers, and the colleges themselves, and has penalized colleges that didn’t value what the magazine ranked.
Rather than adjust the formula to account for the varied rates at which test scores were submitted, for example, the magazine arbitrarily assigned lower scores to colleges, like Sarah Lawrence, that adopted test-optional policies. For colleges like Reed, which did not submit data at all, U.S. News created data, artificially lowering those institutions’ rankings. Institutions without the national publicity to garner enough peer assessments annually went unranked (in the 2010 rankings) or were “assigned values equaling the lowest average score among schools” (in the 2023 rankings).
The fallacy of the rankings is also clear in their use of “graduation-rate performance,” a measure that compares a college’s actual graduation rate with the rate the magazine predicts for it. Again, Morse has chosen to present his beliefs and judgments as facts and data.
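As such a measure is commonly described, a statistical model predicts each college’s graduation rate from institutional inputs, and the college is then scored on the residual: actual rate minus predicted rate. The sketch below illustrates that mechanism with an ordinary least-squares fit; the inputs and numbers are invented for illustration and are not the magazine’s model.

```python
# Illustrative sketch of a "graduation-rate performance" style metric:
# predict each college's graduation rate from institutional inputs, then
# score the college on actual minus predicted. The inputs and data are
# invented, and OLS here is a stand-in, not U.S. News's actual model.
import numpy as np

# Hypothetical inputs: per-student spending ($1,000s) and median SAT.
X = np.array([
    [20.0, 1150.0],
    [45.0, 1300.0],
    [90.0, 1480.0],
    [35.0, 1250.0],
])
actual = np.array([0.62, 0.78, 0.95, 0.70])  # actual 6-year grad rates

# Ordinary least squares with an intercept column.
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, actual, rcond=None)
predicted = X1 @ coef

# "Performance" is the residual: actual minus predicted.
for i, perf in enumerate(actual - predicted):
    print(f"college {i}: actual {actual[i]:.2f}, "
          f"predicted {predicted[i]:.2f}, performance {perf:+.3f}")
```

Whatever the model, the “prediction” embeds its builder’s choice of inputs and functional form, so the resulting “performance” score reflects editorial judgment as much as institutional fact.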
When asked, in a recent interview, whether professionals filling out the peer-assessment survey were creating circular feedback by relying on previous years’ U.S. News rankings to provide a score for unfamiliar colleges, Morse responded, “If they’re telling you that, then that’s not our expectation of how people are doing their ratings. … We believe that there’s more thought going into it than that, but we haven’t done any kind of social-science research to prove or disprove that point.”
While Eric Gertler, chief executive of U.S. News, claims the rankings are an “objective resource to help high-school students and their families make the most well-informed decisions about college and ensure that the institutions themselves are held accountable for the education and experience they provide to their students,” the reality is much different.
Colleges will continue to engage in deception, manipulation, influence-wielding, and outright lying to shift the rankings. The rankings bring out the worst in colleges and harm the higher-education landscape. Public colleges suffer because the rankings are tilted to favor wealthier, smaller, private colleges. Colleges that choose to focus on excelling in areas outside of U.S. News’s criteria, or that don’t want to participate in the rankings at all, suffer because uninformed students decide where to enroll based partly on rankings.
That doesn’t seem to bother Morse and U.S. News — as long as they continue to sell magazines.