Could the peer review of the future resemble collaborative blogging? Or provide speedy decisions with online votes? As humanities editors continue to experiment with Web-based technology, two proposed online tools are highlighting disagreement over what needs fixing.
Peter H. Sigal, a blue-haired associate professor of history at Duke University, and his two fellow editors of the Hispanic American Historical Review have an ambitious design for an online, open peer-review platform for the nearly 100-year-old journal, which they took over in July. Their platform would allow submitted manuscripts to evolve over time through online comments and suggestions before editors—or maybe even online contributors—decided their fate.
The focus, Mr. Sigal says, should be an open, inclusive discussion of ideas and arguments—a “democratic production of knowledge.” Traditional peer review, he says, is exclusionary and overly concerned with the “yea or nay” of publication.
Across the Atlantic, Nicolas Espinoza, an assistant professor of philosophy at Stockholm University, disagrees and says that, in fact, the yeas or nays need to come more quickly. To hasten a process that many scholars see as too slow, his experimental digital platform would help editors find speedy reviewers and allow online-community members to weigh in quickly on a manuscript by casting breezy up-or-down votes, as on social-media sites such as Reddit.
Mr. Espinoza says such a model would not only hasten acceptance decisions but also ensure that a broad range of scholars participates in the crucial yet thankless task of reviewing. Like Mr. Sigal, he hopes his model will be adopted throughout the humanities.
Though their opinions of traditional peer review aren't mutually exclusive, their proposed solutions are, adding yet another sticking point for academics pushing for a scholarly-publishing system that can evolve with technology. Both plans are likely to meet the same resistance and skepticism that other digital-publishing experiments have faced.
A study of open peer review released this summer by a panel of scholars assembled by MediaCommons and New York University Press, and paid for by the Andrew W. Mellon Foundation, found that “open review is too often assumed by skeptics to produce inferior-quality evaluations derived from lax and/or nonexistent standards.”
To break down the resistance, the study, titled “Open Review: A Study of Contexts and Practices,” suggests that scholars in the humanities clearly state “expectations for its open-review practices.” But what are those expectations?
A Variety of Problems
Scholars and journal editors can reel off a variety of problems with traditional journal publishing that they would expect a new system to fix: slow reviewers, low-quality review reports, and the difficulty of finding reviewers. And the sheer variety of opinions scholars hold about submissions hints at the trouble in designing a successful open peer-review system.
“When you get five academics together, no one is going to agree,” says Reid Andrews, a history professor at the University of Pittsburgh and former editor of the Hispanic American Historical Review. He sees open peer review as an “extreme version of the five reviewers.”
But overwhelming participation and a lack of consensus weren't the issues when editors of the prestigious science journal Nature tested an open-review process in 2006. In that trial, which was framed as a way to "explore a more participatory approach" to peer review, the journal made submitted manuscripts freely available online and waited for comments to roll in. They didn't. Largely seen as a failure, the trial suggested that without credit for participating, reviewers are even harder to come by.
“I don’t think it has gotten harder to find reviewers,” says Mr. Andrews, “but I don’t think it was ever terribly easy.”
In 2010, Shakespeare Quarterly tried a similar open-review process designed to raise participation, with much more success. The platform allowed reviewers and authors to interact online, which encouraged public discussions and lengthy, threaded conversations about the manuscripts. The Mellon study referred to that process as “developmental editing.”
That is exactly the kind of environment Mr. Sigal hopes to generate on the new digital platform for the Hispanic American Historical Review. "We've found that people who are in Latin America have less of an opportunity to be involved in the conversation," says Mr. Sigal. As digital opportunities have opened up in the past 10 years, he says, he's seen discussions become more international. "We saw the journal as extraordinarily rigorous, but—and I don't think anyone would dispute this—backwards in the digital realm."
Mr. Sigal and his co-editors, John D. French and Jocelyn H. Olcott, also on the history faculty at Duke, have designed an optional open-review process similar to that of Shakespeare Quarterly. Authors who choose the open process will have their manuscripts, pre-screened by the editors, posted online for fellow scholars to review and discuss. To promote the new method, the first article the journal plans to review openly will be both submitted and reviewed by handpicked senior scholars in the field.
Mr. Sigal hopes that the manuscripts will attract extensive commentary that authors can then cite in the published version. “The idea is that knowledge production then becomes something much more collective,” Mr. Sigal says. “We think of the open peer-review process as helping both to produce knowledge in a more democratic way and to help disseminate knowledge.”
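Neither the editors nor the Mellon report spells out how such a platform would be built, but the workflow they describe (a pre-screened manuscript posted publicly, threaded comments attached to it, and the most helpful comments cited in the published version) maps onto a simple data structure. The sketch below is purely illustrative; every class and field name is a hypothetical stand-in, not part of any actual Hispanic American Historical Review system.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the journal's platform design isn't public, so
# every name here is a hypothetical stand-in for the workflow described above.

@dataclass
class Comment:
    author: str
    text: str
    replies: list["Comment"] = field(default_factory=list)  # threaded replies
    citable: bool = False  # an author may cite helpful comments when publishing

    def reply(self, author: str, text: str) -> "Comment":
        """Attach a threaded reply, as in the Shakespeare Quarterly trial."""
        child = Comment(author, text)
        self.replies.append(child)
        return child

@dataclass
class OpenManuscript:
    title: str
    prescreened: bool = False  # editors vet submissions before public posting
    comments: list[Comment] = field(default_factory=list)

    def post_comment(self, author: str, text: str) -> Comment:
        if not self.prescreened:
            raise ValueError("editors pre-screen manuscripts before open review")
        comment = Comment(author, text)
        self.comments.append(comment)
        return comment
```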
The trio of editors will begin their open-review experiment alongside new blogs discussing coming submissions and international research. Over the next five years of their term as editors, they'll explore, through trial and error, what works.
At Duke, Mr. Sigal says, such collaborative approaches are encouraged and, if they're successful, may be included in the tenure-review process. In presentations and discussions, he says, he and his fellow editors have also found support in the broader community of scholars.
“I definitely think it’s an experiment worth trying,” says Mr. Andrews, the former editor. “It didn’t occur to us to do, I think in part, because we were caught up in the mechanics of doing it the way that it was always done. If we fell down in any area as editors,” he adds, “it was our inability to open up communication that the Internet allowed.”
Loony Reviews?
But not everyone agrees that openness in reviews would be beneficial for scholarship or that the review process should be more collaborative.
“Volunteer reviews online: I think that’s kind of loony,” says Arne L. Kalleberg, a sociologist at the University of North Carolina at Chapel Hill and editor of Social Forces. “If you get volunteers, it’s not conducive to quality.”
He’s not alone in that opinion. The Mellon report noted that open-sourced refereeing is still not embraced as a “legitimate mode of evaluation.”
“I’m skeptical that open-sourced refereeing is going to be successful,” says Brian R. Leiter, a law and philosophy professor at the University of Chicago who writes the philosophy blog Leiter Reports, where Mr. Espinoza announced his idea for a new peer-review system. “The basic reason is because editors want reports from people they trust.”
Mr. Espinoza flatly counters: “That’s a reasonable response, if what we have now was really good.” He says that the current system often provides slowly delivered, poor-quality reviews. And only a small batch of scholars seems to be trusted to review, he adds. “A lot of my colleagues don’t get asked to review.”
Editors such as Mr. Kalleberg and Mr. Andrews will readily admit that it’s common to go back to the same people repeatedly for reviews, which can mean that scholars with a record of good reviewing become overburdened by requests.
“You’re punished for good behavior,” says Daniel J. Myers, a sociologist at the University of Notre Dame who feels the review process is broken and has written about the topic for The Chronicle.
“My frustration comes from just how long it takes to get a review,” Mr. Espinoza says. “The minimum is three to four months, and you can wait up to two years. That’s just too long.”
He says editors could achieve quality control by rewarding reviewers for good reports with scores that reflected the helpfulness of their comments—like online karma.
A quick up-or-down vote by the online scholarly community could precede a formal review, he says, helping editors decide whether a manuscript had potential for acceptance. He hopes to sell the platform as a tool to big, for-profit publishing houses.
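Mr. Espinoza has announced only the idea, not an implementation, but the triage he describes (community up-or-down votes that precede a formal report, plus karma-style scores for reviewers whose reports prove helpful) can be sketched in a few lines. Everything below, from class names to thresholds, is an assumption made for illustration, not his actual design.

```python
from dataclasses import dataclass

# Purely illustrative: Mr. Espinoza's platform isn't public, so the names
# and thresholds below are invented for this sketch.

@dataclass
class Submission:
    title: str
    upvotes: int = 0
    downvotes: int = 0

    def vote(self, up: bool) -> None:
        """One Reddit-style up-or-down vote from a community member."""
        if up:
            self.upvotes += 1
        else:
            self.downvotes += 1

    def margin(self) -> float:
        """Net approval in [-1, 1]; 0.0 when nobody has voted yet."""
        total = self.upvotes + self.downvotes
        return 0.0 if total == 0 else (self.upvotes - self.downvotes) / total

@dataclass
class Reviewer:
    name: str
    karma: int = 0  # raised when editors rate one of the reviewer's reports helpful

    def credit_helpful_report(self, points: int = 1) -> None:
        self.karma += points

def ready_for_formal_review(s: Submission, min_votes: int = 20,
                            min_margin: float = 0.25) -> bool:
    """Editors might demand both a minimum turnout and a clearly positive
    margin before commissioning a formal report (thresholds invented here)."""
    return (s.upvotes + s.downvotes) >= min_votes and s.margin() >= min_margin
```

In a system along these lines, the karma scores could also feed back into reviewer selection, which is how Mr. Espinoza argues the refereeing load might be spread beyond the small pool of trusted scholars.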
A method such as Mr. Sigal’s, in which the length of review time may be just as long and more involved, “sounds very academic and old,” says Mr. Espinoza. “What we’re going to try is to provide the means for people to communicate better and spread the refereeing more effectively.” He plans to let the system evolve naturally online, he says.
That may be a key to success. The Mellon study found that “platforms for open review must be developed with an eye toward structured flexibility.”
Humanities scholars may eventually adopt an online peer-review system somewhere in between Mr. Sigal’s and Mr. Espinoza’s platforms, but for now, a range of models may be the only way to move academic publishing online.
“I’m not particularly interested in fads,” says Priscilla Wald, an English professor at Duke and editor of American Literature, “but this isn’t a fad. This is a change. As scholars, it’s our job to see how the world is changing. We need to consider the conceptual changes that are coming with this media.”