International university rankings have become a major force in higher education over the past decade. Institutions now highlight their standing in their advertising campaigns. Students turn to the lists for ideas on where to apply. Some governments have even invested more in higher education when their universities’ rankings fall short of expectations.
Yet it’s hard to find anyone who believes that the assessments themselves are sufficiently substantive.
In response to growing criticism about ranking methodologies, several key players have pledged to shake things up by offering new or improved versions of these tables.
Last month, for example, the European Union moved ahead with the development of a more nuanced and complex rankings system, one that, its backers say, academics will actually approve of. And the partners behind one of the two most influential rankings systems had an acrimonious split, with each now promising to produce a superior product.
The development of rankings and their growing influence have both paralleled and fueled the internationalization of higher education, as universities have sought to benchmark themselves and compete with their counterparts around the world. A Russian ranking, compiled for the first time in 2009, placed Moscow State University fifth in the world, ahead of Harvard and Cambridge, among other universities; it drew derision even from Russian academics, and little attention elsewhere. (The two main international rankings have never placed Moscow State higher than 66th.)
Mobile Students
The pressure to measure up is driven by a number of factors. Key among them: Worldwide enrollments in higher education have jumped by more than 50 percent in the past decade, and the number of internationally mobile students seeking information about institutions in foreign countries has soared. The growing number of universities looking to forge international ties has also driven rankings fever, as they vie for partners of similar international repute.
Thus the 27-nation European Union’s decision to create a ranking system has gained worldwide attention. News that the London-based Times Higher Education has severed ties with the company with which it has produced its compilation for the past six years is also generating quite a bit of buzz.
“There’s a lot going on right now” in the realm of international rankings, and a great deal of interest in “what it is that the European rankings are trying to do,” says Philip G. Altbach, director of the Center for International Higher Education at Boston College.
Thomas D. Parker, a senior associate at the Institute for Higher Education Policy, a Washington-based nonprofit group that studies college accountability, is intrigued as well. “It is clear that the Europeans are proposing something considerably more complex” than the two main existing compilations, he says.
As recently as 2003, there was just one international enumeration, the “Academic Ranking of World Universities,” put out for the first time that year by Shanghai Jiao Tong University’s Institute of Higher Education. Its impact and influence were immediate—and it stirred controversy from the outset, especially over the relatively poor showing of continental European universities compared with those in the United States and Britain.
Competition soon followed, in the form of rankings produced the following year by The Times Higher Education Supplement, as the British publication was then known. The table is now so highly anticipated that on the day the updated list was posted online in October, it generated more than one million hits to the publication’s site.
International rankings carry so much heft that they help shape higher-education policy in many countries. But as their influence has grown, so has dissatisfaction in many quarters with how the best-known tables are compiled. And the rankers themselves have come under increased scrutiny.
Ellen Hazelkorn of the Dublin Institute of Technology studies the impact of rankings on higher-education policy. “There’s no such thing as an objective ranking,” she says, so people end up arguing about which are the best indicators to use.
The International Observatory on Academic Rankings and Excellence, a nonprofit group that was set up last year with headquarters in Warsaw, is among the growing number of organizations seeking to track, evaluate, and, yes, rank the rankers.
The organization, which was spun off by the International Ranking Expert Group of Unesco’s European Centre for Higher Education, is working on a questionnaire that will be used to verify that rankers are meeting certain “minimal standards,” says Kazimierz Bilanow, the managing director. Those standards are meant to reflect the “Berlin Principles on Ranking of Higher Education Institutions,” which were endorsed in 2006 by many of the international educators, higher-education experts, and publishers who compose the observatory’s membership.
The rankers have taken note.
“They all understand they’re very vulnerable to criticism,” says Mr. Parker. “All of them are aware that they started out with pretty simple tools, and that if they’re going to satisfy anybody, they need to get a bit smarter.”
Listening to Critics
Last year Times Higher Education ended its relationship with Quacquarelli Symonds Ltd., with which it had published the tables. Phil Baty, the editor who oversees the project, describes the rankings that had been published with the company as “no longer fit for purpose” and said the newspaper was “starting from scratch.” It will publish new rankings in conjunction with the media conglomerate Thomson Reuters.
Quacquarelli Symonds will continue to publish its own World University Rankings.
“We retain the intellectual property for the existing methodology, and we also own all the data for the past six years of the rankings,” says Ben Sowter, head of the company’s intelligence unit. “The only difference, from our point of view, is that they will no longer be published with Times Higher Ed.”
Despite Times Higher Education’s name recognition, especially in academic circles, Mr. Sowter says that “their Web site has consistently underperformed in comparison to our own in terms of tracking page views and visitors to look at the results.” The rankings on Quacquarelli Symonds’s site generated 4.8 million visits last year, he adds.
For its part, Times Higher Education is focusing on producing what it says will be a much more “robust, transparent, and balanced” set of rankings. “The final methodology is still not set,” says Mr. Baty, but two key improvements are under way.
One involves the rankings’ peer-review component, which has been sharply criticized. Although there were fewer than 4,000 responses to last year’s global survey of academics, the peer-review element was still heavily weighted, contributing around 40 percent to the rankings.
“This just is not good enough,” Mr. Baty says in an e-mail exchange with The Chronicle. “For the new world rankings, we have decided to retain an element of ‘peer review,’ but we are going to ensure we have a much better sample, which will be properly targeted and properly representative of world higher-education demographics, and which will have a much higher response rate.”
The target is for a representative selection of 25,000 academics to respond to the survey. And the peer-review element is likely to be weighted much less heavily than before.
Although academics and administrators have long been skeptical of how effectively the input of independent academics was solicited, the inclusion of a peer-review dimension, which does not factor into the Shanghai rankings, had been one of the central selling points of the British tables.
Peer Review Questioned
However, a paper to be published next month in the American Journal of Education argues that, far from being an effective independent gauge of institutional reputation, peer review is highly susceptible to manipulation. The main culprits, the authors say, are previous rankings.
“Over time, the primary driver of changes in the reputation scores used by U.S. News & World Report is the U.S. News & World Report rankings themselves, even when controlling for academic and financial indicators, as well as prior reputation scores,” says Michael N. Bastedo, an associate professor of education at the University of Michigan at Ann Arbor, who is one of the paper’s co-authors. The effect is even more pronounced in international rankings, he says, because academics’ familiarity with foreign institutions is often even more attenuated and dependent on prior rankings.
Mr. Sowter, of Quacquarelli Symonds, defends the company’s continuing emphasis on a peer-review component, adding that it is seeking more input from academics and aims to raise response numbers through measures such as translated surveys for academics at non-English-speaking institutions.
“Of all the measures that different rankings are using at a global level, from my perspective peer review is the one that is fairest to universities with different disciplines,” he says. The use of peer reviews “enables institutions with great strengths in the arts and humanities to shine in a way that they are not able to in other measures.”
This year’s retooled Times Higher Education rankings will also adjust how research excellence is measured, by reconfiguring how citations are calibrated.
“The social and economic sciences have much lower citation rates than the natural sciences, so institutions with big medical schools, for example, received a massive and unfair advantage under the old method,” to the detriment of institutions like the London School of Economics and Political Science, Mr. Baty says. The new collaboration with Thomson Reuters, which owns a large research-citation database, he notes, will allow Times Higher Education to “draw on much more sophisticated data, and we are confident that we can get this right.”
Shanghai Jiao Tong’s rankings, too, have been criticized for their heavy reliance on research citations and Nobel Prizes as measures of excellence.
Nian Cai Liu, director of the university’s Institute of Higher Education, was the force behind the original international rankings. The rankings’ methodology will stay the same, he says in an e-mail exchange, but other changes have been made.
“New rankings have been introduced” in chemistry, physics, mathematics, computer science and engineering, and economics that augment the institutional lists, says Mr. Liu, who is a professor of engineering.
The research center that he heads is part of Shanghai Jiao Tong, but, as of last year, the rankings have been published by the separate Shanghai Ranking Consultancy and have no official relationship with the university.
Rumors have circulated that pressure from the Chinese government, which supposedly had fielded angry questions from at least one foreign government about the performance of its country’s institutions on the high-profile assessment, prompted Shanghai Jiao Tong to officially distance itself from the rankings. In response, Mr. Liu says the university itself never sponsored them; he declines to comment on the rumored pressure.
Politically Driven
Politics is, in fact, a prime motivator behind the newest entrant in the global rankings market.
Frans van Vught, a former rector of the University of Twente, in the Netherlands, is one of the leaders of the European project, and he is frank about the fact that the new endeavor “is very much a politically driven thing.”
The $1.6-million budget is financed by the European Union, which solicited competitive bids before awarding the contract to a German-Dutch-Belgian-French consortium for “developing a ranking system that goes beyond the research performance of universities, to include elements such as teaching quality and community outreach.”
But, as Mr. van Vught emphasizes, the new project is more than just a riposte to Shanghai and Times Higher Education. “We are trying to do something very different,” he says, by steering the emphasis away from research intensity and toward a handful of other indicators.
The project will build upon a recently concluded European classification project, known as U-Map, that developed profiles of institutions in six categories: teaching and learning; the student body; research; disseminating research knowledge; international orientation; and regional engagement.
Mr. van Vught likens the resulting classification of each institution to a sunburst, with each category contributing a ray. The European ranking project will use that approach to develop a consumer-driven ranking system.
“You can create many rankings, based on your own priorities,” Mr. van Vught says. Students and business representatives, for example, may value very different qualities when they rank institutions or programs, with undergraduates focused on facilities, library hours, and student-faculty ratios.
Business and industry are also increasingly preoccupied with rankings but are more interested in measuring outcomes and assessing the quality of graduates and their capabilities in the labor market, he says.
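The mechanics of such a consumer-driven ranking can be sketched simply: each institution carries a score in each of the six profile categories, and a user’s priorities become weights in a composite score. Below is a minimal illustration in Python of how that might work; the institution names, category scores, and weights are all invented for illustration, and the European project’s actual scoring methodology, as the article notes, is still being designed.

```python
# Hypothetical sketch of a user-weighted, multidimensional ranking.
# The six categories mirror the U-Map profile dimensions described above;
# every institution, score, and weight here is invented for illustration.

CATEGORIES = [
    "teaching_and_learning", "student_body", "research",
    "knowledge_dissemination", "international_orientation",
    "regional_engagement",
]

# Hypothetical per-category scores on a 0-100 scale.
profiles = {
    "University A": [82, 70, 95, 88, 60, 45],
    "University B": [90, 85, 60, 55, 75, 80],
    "University C": [70, 65, 80, 75, 90, 55],
}

def rank(profiles, weights):
    """Sort institutions by a weighted average of their category scores."""
    total = sum(weights.values())
    scores = {
        name: sum(weights.get(cat, 0) * val
                  for cat, val in zip(CATEGORIES, vals)) / total
        for name, vals in profiles.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# A prospective undergraduate might weight teaching most heavily ...
student_weights = {
    "teaching_and_learning": 5, "student_body": 3, "research": 1,
    "knowledge_dissemination": 1, "international_orientation": 2,
    "regional_engagement": 1,
}

# ... while an employer might care most about research and outreach.
employer_weights = {
    "teaching_and_learning": 2, "student_body": 1, "research": 4,
    "knowledge_dissemination": 4, "international_orientation": 2,
    "regional_engagement": 3,
}

print(rank(profiles, student_weights))   # different priorities ...
print(rank(profiles, employer_weights))  # ... yield different orderings
```

The point of the design, as Mr. van Vught describes it, is that no single ordering is privileged: changing the weights changes the ranking, so students and employers consulting the same underlying profiles can arrive at different lists.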
The project is still in the design phase, as Mr. van Vught and his European colleagues work out the six measurements. For all the international preoccupation with learning outcomes, for example, there is still no universally accepted way to measure them, so part of what the researchers are studying is which proxies to use for different indicators.
Around March, they will begin testing on a sample of around 150 institutions around the world, using Web-based questionnaires.
“A major challenge will be to make sure the information is comparable,” Mr. van Vught says. The new system is expected to be ready for a pilot test in early 2011.
The Europeans’ holistic approach reflects the widespread yearning for user-friendly rankings amid their growing acceptance as an integral part of the international higher-education landscape.
“Like it or dislike it, rankings are going to take place,” says Mr. Bilanow, of the Warsaw-based rankings observatory. “It’s not that we think that rankings are the most important thing in the world, but since they exist, we need to at least try to make them more understandable.”
Even the rankings pioneer Mr. Liu welcomes the growing competition to his precedent-setting compilation. He serves on the international panel of experts for the European project and took part in its first panel meeting, in Shanghai, in November. “We think that more and diversified rankings are good for the higher-education community and the general public,” he says.