The number of international university rankings continues to grow, transforming a crowded and increasingly controversial field with new methodologies and new uses for rankings and the data compiled to produce them.
A report, “Global University Rankings and Their Impact II,” published on Friday by the European University Association, outlines recent developments and trends and is a follow-up to a 2011 report published by the Brussels-based group.
Since the last report, although criticism of rankings has intensified, reliance on them has expanded, and they are increasingly being used to shape institutional and public policy, the report says.
The number of rankings has also grown. For the first time last year, for example, Times Higher Education and Quacquarelli Symonds, better known as QS, each published a ranking of universities less than 50 years old.
The report also singles out a new comparative ranking of 48 national higher-education systems by Universitas 21, an international university network. It deems the effort “an interesting new approach,” but says the methodology could benefit from “further refinement” because of its reliance on an existing ranking “whose indicators are particularly elitist.”
The tendency of rankings to focus on elite universities is one of the main concerns in the report. The methodologies of many of the best-known rankings, including the Shanghai Jiao Tong Academic Ranking of World Universities and those produced by QS and Times Higher Education, rely heavily on publication and citation data and academic-reputation surveys.
Those methodologies “are not geared to covering large numbers of higher-education institutions, and thus cannot provide a sound basis for analyzing entire higher-education systems,” the report says.
Ellen Hazelkorn, director of research and enterprise at the Dublin Institute of Technology and a leading critic of rankings, says the focus on elite institutions is one of the most pernicious effects of global rankings.
“There are 17,000 higher-education institutions internationally, and we are obsessing about less than 100 of them,” she says. “The overwhelming majority of our students—good, good students—do not go to these institutions, and yet they seem to be the only ones we’re interested in.”
A ‘Welcome Development’
Another drawback of many of the rankings is that they continue to neglect the arts, humanities, and social sciences, and remain focused primarily on the scientific research produced at universities, according to the report.
Teaching is also given little weight. The report details several developments designed to counter the biases of the dominant rankings, including the advent of user-driven rankings, which allow individuals to select the criteria they value.
In what it calls a “welcome development,” the report notes that the rankers “have themselves started to draw attention to the biases and flaws in the data underpinning rankings and thus to the dangers of misusing rankings.”
Ms. Hazelkorn sees that new stance as a byproduct of the increasing competitiveness of the field, as the main players try to demonstrate that they are responsive to criticism and that their methods are transparent.
“After all, they’re trying to get higher education to be more transparent,” she says. “So they have been forced into that because of the kinds of criticism that have been made.”
One criticism is that the companies and institutions that conduct rankings increasingly stand to gain financially from the vast amounts of information that they ask universities to provide, often at considerable effort to the institutions.
“It is ironic that the data submitted by universities free of charge is often sold back to the universities later in a processed form,” the report notes.
Indeed, universities have begun to use rankings for institutional benchmarking, in an exercise that often plays a role in strategic planning. QS, for example, offers institutions a benchmarking service that provides comparisons with other institutions.
The report concludes that rankings are a permanent fixture of the higher-education landscape, but emphasizes that “many issues relevant to academic quality cannot be measured quantitatively at all.”