QS recently released its World University Rankings for 2018. QS stands for "Quacquarelli Symonds," but I like to joke that "Quirky Silliness" is a more apt description of the list, given the results.
In the two academic fields I know best — law and philosophy — there is no "world" scholarly enterprise, and so the rankings are entirely driven by the proportion of respondents from different regions of the world, plus the halo effect of university name recognition. (I have heard similar things from colleagues in other fields.) QS does not, however, disclose the geographic distribution of its survey respondents, so the extent of the distorting effect cannot be determined.
QS is merely one player in a new industry of ranking universities on a global scale. Times Higher Education, in Britain, now produces its own rankings. A third list, the Shanghai Rankings, was started by a university in Shanghai but is now independent. Each list has problems, but QS stands out for its dubious business practices.
Indeed, QS used to partner with Times Higher Education, but the latter dropped out, citing doubts about the QS methodology. For example, a university can recruit people to vote in its favor in the online surveys, and QS offers paid consulting services to improve a university's ranking, as well as paid "guidance" for university leaders. The enterprise is so brazen that it effectively sells rating approval for a price.
Not all of the world university rankings are as suspect as that. Times Higher Education eschews the conflict-ridden marketing of QS but fails to disclose the geographic distribution of its evaluators. The Shanghai rankings, by contrast, use only "objective" metrics — Nobel Prizes (even if awarded to dead professors), publications in leading science journals, citations. As a result of those metrics, the Shanghai effort is a ranking of universities not in their entirety (since the humanities and arts are invisible), but in terms of their strength in engineering, medicine, and the natural sciences (strength in the social sciences counts for a little but is swamped by the other fields).
Consider that the University of California at San Francisco — which is an outstanding medical school with some allied-health fields but doesn’t offer any other academic programs — ranks 21st in the world, according to the folks in Shanghai, while actual research universities with excellence across many disciplines (e.g., the University of Michigan at Ann Arbor and Northwestern University) rank lower.
American research institutions dominate all three global rankings, yet U.S. universities have tended to ignore the world ratings.
Global rankings of research universities are just one symptom of economic globalization. Millions of students in Asia, Africa, and South America seek advanced education; in many cases, their home countries will fund some or all of their graduate studies. These students have no easy way of gauging what is on offer abroad; hence the incentive for the global rankers. The university-rankings industry preys on the least-informed students and on the universities desperate for their tuition revenue — hence the incentive for an institution to recruit supporters to vote in its favor on these rankings, a vulnerability that QS has exploited.
To be clear: I am not a knee-jerk opponent of rankings; far from it. For years I produced rankings of programs in both philosophy and law. In law, given the pernicious influence of the sloppy U.S. News rankings, my efforts have been uncontroversial and welcomed. In philosophy, my efforts have occupied the field, to the benefit of thousands of students and the consternation of many faculty members whose programs fare poorly or who think they "know better."
From two decades of experience, I am confident that serious assessments of the academic quality of graduate programs are an enormous asset to foreign students, to students at non-elite universities, and to students at elite universities with eccentric biases that would be unknown to undergraduates. Of course, sometimes my rankings caused controversy, too; that is unavoidable in debates over institutional reputation. But the contrast between the way my philosophy rankings have conducted assessments and the way the world university-ranking industry does so is telling.
For nearly 20 years, my philosophy rankings (published by Wiley-Blackwell) were based on an online survey of leading philosophy scholars (hundreds participated in each iteration), who were asked to rank programs "overall" and then in their own areas of specialization (we covered more than 30 specialties). Evaluators were presented with nearly 100 lists of faculty members — minus their university affiliations — and asked to assess their scholarly distinction.
Of course, most evaluators could guess which list of scholars came from Harvard or Chicago. But the decline of both institutions in the philosophy rankings over the years has been proof of the value of muting the halo effect — as has the rise of institutions like New York University and Rutgers University (now the top two philosophy programs in the United States). Both parlayed strong results in these surveys into increased institutional support.
In addition, we published a list of all the evaluators, identifying their current institution, where they got their Ph.D.s, and their areas of specialization. (Evaluators were not permitted to rate either their own campus or their graduate alma mater.) The transparency of the method and the high quality of the evaluators made these rankings the "bible of prospective graduate students," as Berkeley’s David Kirp wrote in The New York Times some years ago.
By contrast, the two world university rankings that include evaluations by academic experts — QS and Times Higher Education — use an indefensible methodology. I know, because I have participated in both.
So, given those flaws, should knowledgeable academics participate in the global surveys?
The primary reason that informed academics in the United States ought to participate is simple: Young people throughout the world seeking higher education need guidance, so some expert input is better than none.
Counting against that consideration is, of course, complicity in the suspect behavior of QS. Balancing doing some good against complicity with unethical practices is a familiar dilemma in many arenas. But here, at least, a kind of compromise is available. Professors can boycott QS but continue to supply expert evaluator input to Times Higher Education — and also lobby the latter to improve its survey practices.
To be sure, the QS results will get even weirder without American input, especially given the leading place of U.S. universities in so many fields. But American universities, still the richest in the world in most fields, can afford to ignore hucksters. Students can still turn to Times Higher Education, and at least in some fields to the Shanghai rankings, for alternative guidance.
Brian Leiter is a professor of jurisprudence and director of the Center for Law, Philosophy and Human Values at the University of Chicago.