When Anna Poletti found out she’d had an article accepted by the journal Biography: An Interdisciplinary Quarterly, the young Australian scholar of life writing—autobiography, biography, letters, and other forms of recorded experience—was thrilled. “It got me a whole lot of attention in the field,” she says, because Biography is highly regarded by her peers.
It is not so esteemed, however, by the Australian government. A new journal-ranking plan—which helps the government determine how research money is doled out to universities—has dropped Biography from the highest ranking, A*, to a lowly C. Judged that way, an association with Biography looked like more of a career killer than a coup.
“I am actually dragging down the overall score of my unit by publishing in a C journal,” says Ms. Poletti, a lecturer in English at Monash University—her first tenure-track job.
The ranking is part of an overall evaluation system devised by the Australian Research Council, an agency that finances scholarship and innovation in the country through grants and through advice to other government departments. At stake is a share of about $1.63-billion (U.S.), which supports a multitude of activities including academic research, says Margaret M. Sheil, chief executive officer of the council. The system, called Excellence in Research for Australia, helps the government decide how much goes to a given research unit at a university.
Journal rankings are not just an Australian phenomenon. Scholars worldwide are tangling with what Ms. Poletti calls “the culture of audit.” The United States does not have such a ranking system, but U.S.-based researchers and journal editors find their work drawn into such assessments anyway.
Biography, for instance, comes out of the University of Hawaii’s Center for Biographical Research. Around the world—in Britain, South Africa, New Zealand, and elsewhere—cash-strapped governments are experimenting with schemes to measure the quality of the academic research they pay to support.
In Europe, the European Science Foundation is about to release a new round of journal rankings as part of its European Reference Index for the Humanities. Ms. Sheil, who talks regularly with assessors in other countries, has recently traveled to the United States to discuss Australia’s evaluation system.
The system covers all disciplines. In many fields, especially the sciences, the journal rankings have not been a big source of complaint. In the humanities and social sciences, though, the culture of audit has prompted anxiety, at least in some disciplines.
Scholars may find themselves caught between the desire to publish in journals their peers respect and the need to appease university administrations that want quantifiable proof of excellence. Editors of lower-ranked journals worry that low marks will scare off contributors.
And everyone, even those in charge of the process, worries about unintended consequences. The evaluation system was designed to highlight and support research fields and groups of researchers; it was not meant to be used in individual performance reviews. But that’s exactly how some Australian universities are using it.
Measuring Up
Begun in 2008, Excellence in Research for Australia is that country’s latest attempt to make the most of the research dollars it spends in an era of budget stress. The system is designed to measure output in 157 fields of research. It divides fields into eight clusters: for example, physics and chemical and earth sciences; humanities and creative arts; and social and behavioral sciences.
The assessment mechanism is complicated. It draws on multiple factors, including citation indexes and data from Australian universities on where, what, and how much their faculty members are publishing, what kinds of grants they’ve gotten, and so on. Departments or research units receive a numerical ranking, on a scale of 1 to 5. For 56 fields—mostly in the arts, humanities, and social sciences—the research council decided that it couldn’t rely on metrics like citation-data analysis alone. It asked a pool of 500 peer reviewers to assess the results in those fields. (The reviewers’ identities haven’t been made public.)
The council defines an A* journal as one “where most of the work is important (it will really shape the field) and where researchers boast about getting accepted. Acceptance rates would be low and the editorial board would be dominated by field leaders, including many from top institutions.” B journals have “a solid, though not outstanding, reputation,” with few leading researchers on their editorial boards. The C category comprises “solid, peer-reviewed journals that do not meet the criteria of the higher tiers.” Some academic journals earn no rating at all.
About 22,000 journals, many based in the United States and Europe, were ranked as part of the process. Those rankings have gotten more attention, and generated more complaints, than any other component of the evaluation system.
Craig Howes, a professor of English at the University of Hawaii-Manoa, is an editor of Biography. In the rankings’ first iteration, Biography was assigned a B. He and his fellow editors rallied Australian colleagues to petition the council for a revision. In the second version, the journal scored an A*. But in February 2010 the final version of the lists dropped Biography to a C.
According to the Australian rankings, Biography got a lower ranking than a regional American journal devoted to biography, Mr. Howes says. “We got dropped below a journal in our field that hasn’t published an issue in two years.” Biography, meanwhile, had been publishing quarterly, including a special issue in 2010 that was guest-edited by Sidonie Smith, who that year was the president of the Modern Language Association.
Mr. Howes thinks that Biography might have been hurt by the evaluation system’s emphasis on high-profile editorial boards. Biography’s editors decided almost 10 years ago to dispense with its board. In an essay published in the Journal of Scholarly Publishing in April, Mr. Howes explained why: “What we argued was that the real measure of our standing as a journal was the number of major figures in the field who appeared within our pages, and that in the nine years since we eliminated the advisory board, the appearance of such individuals has markedly increased.”
For the Australian evaluation’s purposes, however, that just means Biography doesn’t have a strong board.
“The other thing I’m sure of,” Mr. Howes wrote, “is that no one who knows anything about the journals in the field of life writing had anything to do with our current ranking, all institutional claims about panels of experts to the contrary.”
Gillian Whitlock, a professor of literature at the University of Queensland, in Australia, specializes in life narrative. She agrees with Mr. Howes that Biography got a raw deal. “There were some really embarrassing rankings,” she says. “I personally have been embarrassed about the assessment of journals in life writing.”
Ms. Whitlock has concerns but is not an ARC-basher. She notes that the research council handles many kinds of research-management issues, and that it gives Australian researchers “access to very competitive but generous funding schemes.” (She herself is the recipient of a five-year professorial fellowship from the council.) Over all, she believes, “people accept that there’s going to be some kind of quality management” system in place, and that in spite of concerns about some rankings, “there’s a good deal of admiration for what the ARC’s trying to do.”
Ms. Sheil, of the research council, believes that the evaluation process is generally sound. She comes at it from the point of view of a researcher—she trained in chemistry—and longtime university administrator. Her council has conducted its own analysis of the assessment system’s statistics, she says. This year it also held a public-comment period and is reviewing the results.
Most of the 22,000 journals on the Australian rankings were treated fairly, she argues. She won’t comment on individual cases like Biography’s. “I think we got most of it right. But there’s 200-odd that we didn’t necessarily get right in the first round,” she says. “The numbers are small, but the noise is loud.”
She does acknowledge that using an A-B-C scheme was probably a mistake, and that it may be dropped in future rankings. “The list that we had was the list for 2010, and whatever we do moving forward, it won’t be the same,” she says.
Judging Professors
Low rankings do more than wound journal editors’ pride. They can be repurposed as a way to judge individual scholars. Every year, academics at Australian universities sit down with their supervisors for performance reviews. That’s where the journal evaluation “becomes part of the personal-assessment mechanism,” Ms. Whitlock says. “Even for someone like me, who’s well established and has a large grant, you have to account for your research output.”
The ranking system was not designed to rate individuals, but “it comes to have bearing on research assessments of your work very quickly and very directly,” she says. “It’s not just a bureaucratic obstruction, it’s not just something you can shrug your shoulders and say, ‘They got that wrong.’”
Anna Poletti, the lecturer at Monash, says the Excellence in Research for Australia evaluations have created particular anxiety for early-career researchers like herself: “Certainly for us it’s a very confusing and worrisome process. And that’s partly because our senior colleagues, whom we have traditionally been able to look to for advice, are unsure about the ramifications of the ERA.”
Some colleagues have told her to ignore the journal rankings. Others have told her to change where she publishes. That’s easier said than done. “It can be quite difficult to work out where you go if you’re advised to leave your community of interest and scholarship,” she says.
Because she also works in cultural studies, which has many A and A* journals, Ms. Poletti can offset her appearances in lower-ranked venues like Biography with publications in highly ranked journals like Continuum.
Still, not only was she “really shocked to see that the process could get it so wrong,” but she also worries about the impact on her career as she applies for jobs and grants. The publication in Biography and a subsequent offer to co-edit an issue were “an amazing career opportunity for me, and the way the rankings were shaping up, it looked like a problem rather than the fantastic opportunity it was.”
When Ms. Poletti and other life-writing scholars asked the research council to explain the journal’s C grade, they were told that the council doesn’t discuss individual rankings. “I just thought that was astounding,” she says.
So in March 2010, she and a colleague, Simon Cooper, who teaches writing at Monash, filed a freedom-of-information request to get details about how 10 journals, including Biography, had been ranked. The council released some documentation to them last month, including “all the assessors’ comments and their recommended rankings for the journals,” according to Ms. Poletti. So far, she and Mr. Cooper have not found anything that explains why Biography went from A* to C in such a short period of time.
In an interview, Ms. Poletti wonders whether the whole research-evaluation exercise, with its emphasis on factors such as prestigious editorial boards, isn’t missing the essential point. “There are a lot of scholars in Australia saying, ‘Why this emphasis on where work appears rather than the work itself?’”
Uses and Abuses
Ms. Sheil, the council’s CEO, says she’s concerned about the unintended consequences of the research-evaluation system, particularly how it is being used as a personnel-management tool by universities. The council “didn’t want the federal government to be doing evaluations of individuals,” she says. But, she adds, “I don’t know what else I can do other than say, ‘Don’t do this.’ That wasn’t what we designed it for.”
However the research evaluations are used, and whether or not some other assessment system eventually replaces them, the culture of audit is here to stay. The council’s next round of rankings is scheduled to be released in 2012. Ms. Sheil suggests that adjustments will be made—perhaps monographs will count for more—but that the process is not likely to change drastically.
Now the question is whether to fight it, work with it, or work around it. The Australasian Council of Deans of Arts, Social Sciences, and Humanities, or Dassh, has expressed a number of concerns about the assessment process. “It’s quite discriminatory in favor of the sciences and against the humanities and social sciences and arts,” says Jennifer Radbourne, pro vice chancellor for arts and education at Deakin University, in Australia, and president of Dassh. “As an administrator,” she says, “my job is now to ensure that more people publish in A and A* journals.”
As humanities advocates, though, she and her council are working to make sure that the research evaluations are not “used to evaluate the role of the humanities in a university and the potential defunding of those disciplines.”
She points to Britain as an example of a place where government assessment exercises have led to significant cutbacks. “We don’t have that at the moment in Australia, but usually these things happen in a global environment,” she says.
For many scholars, especially in the humanities and social sciences, the debate over the Australian ranking system and others comes down to this: Can the culture of audit be adapted in ways that accurately measure excellence in their disciplines?
“The desire to evaluate quality is not a desire that is antithetical to scholarship, because scholarship is about good-quality work,” Ms. Poletti says. “It’s whether or not a kind of bureaucratic mode of evaluating quality actually allows the real quality to become visible.”