On Thursday the London-based Times Higher Education releases its new, and heavily hyped, World University Rankings. Nearly a year in the making, the rankings have been highly anticipated, if only to determine whether the magazine has truly delivered on its promise: to create an evaluation system based primarily on reliable, and quantifiable, measures of quality rather than on subjective values, such as reputational surveys.
Times Higher Education produced rankings for the first time this year without the collaboration of Quacquarelli Symonds Limited. Along with the Shanghai Jiao Tong University rankings, the World University Rankings that Times Higher Education and QS published together from 2004 until last year have become the most closely watched and influential university rankings in the world.
Quacquarelli Symonds has continued to produce those rankings, now called the QS World University Rankings, and is partnering with U.S. News and World Report for their publication in the United States.
The relationship between the former collaborators has deteriorated into barely veiled animosity. QS has accused Times Higher Education of unfairly disparaging the tables they once published together. This week the company threatened legal action against the magazine over what Simona Bizzozero, a QS spokeswoman, described as "factually inaccurate" and misleading statements by representatives of Times Higher Education. She said THE's role in the collaboration was limited to publishing the rankings based on a methodology that QS had developed. "What they're producing now is a brand-new exercise. A totally brand-new exercise, with absolutely no links whatsoever to what QS produced and is producing," she said. "So when they refer to their old methodology, that is not correct."
Phil Baty, editor of the rankings for Times Higher Education, declined to respond to QS's complaints: "We are now looking forward, not looking backward."
The release last week of the new QS rankings generated headlines, especially in Britain, with the displacement of Harvard as the world's top university by the University of Cambridge. QS's full list of the top 400 universities will be published next week by U.S. News.
Times Higher Education, by contrast, places Harvard in first place, followed by the California Institute of Technology, the Massachusetts Institute of Technology, Stanford, and then Princeton. Cambridge and Oxford tie for sixth place, the highest spot occupied by a university outside the United States.
"There is no question that this is a real wake-up call for the U.K.," said Mr. Baty. "This confirms, more than ever, that the U.S. has absolutely the world-class education system." He did note, however, that the data on which the rankings are based predate recent cuts in public financing for higher education in the United States.
Times Higher Education is now collaborating with the media conglomerate Thomson Reuters, which is providing the data on which its rankings are tabulated. Because the tables were produced using a new methodology, they represent "Year 1 of a new system," and "you can't make direct comparisons" with the previous rankings, said Mr. Baty.
Nonetheless, Times Higher Education is emphasizing what it describes as the increased rigor of its new methodology, which according to its news release "places less importance on reputation and heritage than in previous years and gives more weight to hard measures of excellence in all three core elements of a university's mission—research, teaching, and knowledge transfer."
Foremost among the criticisms of the previous compilation was that it relied too heavily on a reputational survey of academics, based on fewer than 4,000 responses in 2009. THE's new methodology is based on 13 indicators in five broad performance categories—teaching (weighted 30 percent); research influence as measured in citations (32.5 percent); research, based on volume, income, and reputation (30 percent); internationalization, based on student and staff ratios (5 percent); and knowledge transfer and innovation based on industry income (2.5 percent).
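For illustration, the weighting scheme described above amounts to a weighted sum of category scores. The sketch below is not THE's actual methodology; the category names follow the article, but the example scores are invented, and the real system normalizes each indicator in ways the article does not detail.

```python
# Hypothetical sketch of a weighted composite score using the category
# weights reported in the article (they total 100 percent). The example
# scores on a 0-100 scale are invented for illustration only.

WEIGHTS = {
    "teaching": 0.30,
    "citations": 0.325,          # research influence
    "research": 0.30,            # volume, income, and reputation
    "internationalization": 0.05,
    "industry_income": 0.025,    # knowledge transfer and innovation
}

def composite_score(scores: dict) -> float:
    """Weighted sum of normalized (0-100) category scores."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights total 100%
    return sum(WEIGHTS[cat] * scores[cat] for cat in WEIGHTS)

# Invented example: a strong research university's category scores.
example = {
    "teaching": 90.0,
    "citations": 95.0,
    "research": 88.0,
    "internationalization": 70.0,
    "industry_income": 60.0,
}

overall = composite_score(example)
```

Because citations carry the single largest weight (32.5 percent), two institutions with identical teaching and research scores can still be separated substantially by research influence alone.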
Times Higher Education said that the new system was the only global ranking to devote a section to teaching. The new methodology is much more evidence-based and relies far less on subjective criteria than the old tables, said Mr. Baty. But whereas teaching was previously measured solely by student-staff ratio, the new rankings also incorporate a reputational survey.
Skeptics Not Swayed
But will Times Higher Education's new system impress critics? If the reaction of two of the most outspoken and influential rankings experts is any gauge, perhaps not.
"Really, nothing has changed," said Ellen Hazelkorn, executive director of the Higher Education Policy Research Unit at the Dublin Institute of Technology, whose book "Rankings and the Battle for World-Class Excellence: The Reshaping of Higher Education" is due to be published in March.
Despite Times Higher Education's assurances that the new tables represent a much more rigorous and reliable guide than the previous rankings, the indicators on which the new rankings are based are just as problematic in their own way, she believes. The heavily weighted measure of teaching, which she described as subjective and based on reputation, introduces a new element of unreliability.
Gauging research impact through a subjective, reputation-based measure is troublesome enough, and "the reputational aspect is even more problematic once you extend it to teaching," she said.
Ms. Hazelkorn is also troubled by the role Thomson Reuters is playing through its Global Institutional Profiles Project, to which institutions provide the data used in the tables. She dislikes the fact that institutions are going to great effort and expense to compile data that the company could then sell in various ways.
"This is the monetization of university data, like Bloomberg made money out of financial data," she said.
Geoffrey S. Boulton, a leading University of Edinburgh academic who wrote a recent report, "University Rankings: Diversity, Excellence and the European Initiative," for the League of European Research Universities, agrees that the new rankings do not represent a significant improvement. "One of the problems is that you have a system that is not well designed for purpose, and collecting more information will add nothing at all," he said.
Merely adding more detail, as he said the new rankings had done, obscures the underlying problem, which is that rankings depend on inherently unreliable proxy measures to assess the things they purport to be measuring, he said.
Coming up with an effective way of measuring teaching excellence, for example, is just one hurdle.
"I can think of lots of proxies, but the most fundamental proxy of all is the ethos and commitment of the people in the place, and how can you measure that?" asked Mr. Boulton. "The only way, in a sense, is by going to a place and sensing it, and this is not practicable and is profoundly subjective."
Unfortunately, he noted, the effect of rankings placing so much emphasis on proxies for teaching excellence, such as the number of academic staff who have Ph.D.'s, is that teaching may in fact be suffering.
The combined influence of global rankings and the weight they give to research, together with Britain's national program for allocating university financing largely on the basis of research, means that in British universities, "the dominant driver of activity is research, and often not research of a very high quality," said Mr. Boulton. "The consequence is that many of the best teachers have felt rather alienated."
Despite their skepticism of the rankings' inherent worth, both Ms. Hazelkorn and Mr. Boulton acknowledge that rankings are an unavoidable feature of today's higher-education landscape.
"Given that they are here to stay, they will no doubt become more elaborate, and one of the key issues is who is this going to influence and is the influence it has on them appropriate, proper, and sensible," said Mr. Boulton.