University rankings organizations could soon find themselves on the receiving end of the kinds of evaluations that have made them so newsworthy and influential. At a conference here last week for academics and institutions focused on rankings, the organizer unveiled a project that would effectively rank the rankers.
The IREG Observatory on Academic Ranking and Excellence, which was created a few years ago to develop quality-control mechanisms for rankings, announced that a voluntary trial audit of two or three rankings will soon be under way.
The International Ranking Expert Group, or IREG, first met in Washington in 2004 and two years later came up with a set of principles on the ranking of higher-education institutions.
There has “always been the idea that IREG could evolve into a quality assurance” body, said Gero Federkeil, who oversees the rankings of German institutions by the CHE Centre for Higher Education Development, which co-hosted last week’s conference.
As rankings proliferate around the world, they increasingly have a direct impact on the decisions of students, academic staff, institutions, and policy makers, but each of those groups differs in its use of rankings and in the sophistication it brings to evaluating them.
Less informed groups, such as students, “don’t have a deep understanding of the limitations of rankings,” Mr. Federkeil said, and an audit would provide an assessment tool for users. The rankers themselves also need to be held accountable for possible deficits in their tabulations or methodological flaws, he said.
The audit project, which he is helping to manage, will be based closely on IREG’s principles, which emphasize clarity and openness in the purposes and goals of rankings, the design and weighting of indicators, the collection and processing of data, and the presentation of results.
“We all say that rankings should aim at delivering transparency about higher-education institutions, but we think there should be transparency about rankings too,” Mr. Federkeil said. The audit process could eventually give rise to an IREG quality label, which would amount to an identification of trustworthy rankings, thereby enhancing the credibility of rankings and improving their quality, Mr. Federkeil said.
At the Berlin meeting last week, Mr. Federkeil and Ying Cheng, of the Center for World-Class Universities at Shanghai Jiao Tong University, which produces the best-known and most influential global ranking of universities, outlined the proposed methodology and procedure for the audit. The IREG executive committee will nominate audit teams consisting of three to five people. The chair of each team must not have any formal affiliation with a ranking organization, and at least one member of the audit team must be a member of the IREG executive committee. Audits will be based on self-reported data as well as possible on-site visits, and each full audit is expected to take about five months to complete.
Skepticism and Unease
Whether the audit will actually work remains to be seen. Many of the people who attended the meeting expressed deep skepticism and unease about how effectively a rigorous and independent audit procedure could be applied.
“In principle, I think it’s a good thing,” said Ben Sowter, head of the intelligence unit at QS, which produces the QS World University Rankings. But “there is a long way to go before this audit looks like the kind of measure it needs to be.”
Still, if it eventually evolves into a widely accepted and respected quality-assurance mechanism, the audit could become a useful tool “and enable us to counter some of the criticism that we receive,” he added.
Robert J. Morse, director of data research at U.S. News & World Report, said the magazine would most likely participate in the audit, but only “after we fully understand the processes and how it’s going to be scored.”
He agreed that it is important for rankers to be held to standards and to be transparent in their work.
“We communicate very frequently with academics, but maybe we would need to also post in more detail about the mathematical processes and quality controls and other steps we take from the academic level, and that’s something that we would consider doing,” he said.
Mr. Morse and others also asked whether there would really be critical distance between the audit committee and IREG’s executive committee.
Ellen Hazelkorn, executive director of the Higher Education Policy Research Unit at the Dublin Institute of Technology and a well-known critic of the growing influence of rankings in shaping institutional and governmental policy, noted that rankings have become an intensely competitive business, and that any audit procedures would need to be clear and open enough to ensure that competitors were not pronouncing on one another’s work.
She also said that auditors should ensure that all constituencies are involved in the process, including academics, policy makers, and students.
“I think it could potentially go somewhere,” she said of the audit project. “I’m just not sure as to how it would work and who might subject themselves to it.”
Some people invoked a comparison between the proposed audit and the accreditation process in the United States, in which universities participate voluntarily, but Ms. Hazelkorn emphasized that “universities have a compulsion to participate in accreditation” in order to secure eligibility for such financial benefits as Pell Grants, and that such an incentive is absent when it comes to rankings.
But Nian Cai Liu, dean of the Graduate School of Education at Shanghai Jiao Tong University, which began producing the Academic Ranking of World Universities in 2003, applauded the effort.
“We need something, we need to start,” he said. “I think there will be more and more rankings, but there will be, in a sense, more concentration of rankings,” he predicted. Those with an IREG approval label would grow in influence, he said, while the rest would lose significance.