Getting published in a humanities journal usually works like this: Submit an article, then hope for the best as the editors send it to a few hand-picked specialists for critique. The reviewers and the authors are not supposed to know one another’s identity.
But now scholars are asking whether this double-blind peer-review system is still the best way to pass judgment. The Internet makes it possible to share work with many people openly and simultaneously, so why not tap the public wisdom of a crowd? One of the top journals in literary studies, Shakespeare Quarterly, decided to put that question to the test.
For this year’s fall issue, a special publication devoted to Shakespeare and new media, the journal offered contributors the chance to take part in a partially open peer-review process. Authors could opt to post drafts of their articles online, open them up for anyone to comment on, and then revise accordingly. The editors would make the final call about what to publish (hence the “partially open” label). As far as the editors know, it’s the first time a traditional humanities journal has tried out a version of crowd-sourcing in lieu of double-blind review.
The verdict from several scholars who took part: mostly a thumbs up, with a few cautionary notes and a dollop of “It’s about time” mixed in.
“It was on the whole a successful experiment,” said Martin Mueller, a professor of English and classics at Northwestern University, who took part as a reviewer.
Michael Witmore, a professor of English at the University of Wisconsin at Madison, co-authored an article on the use of statistical analysis and a text-tagging database to reveal linguistic patterns in Shakespeare. He and his co-author “got some terrific ideas and some citations” from the comments of the six or so people who actively responded to the article, he said. “It produced a more interesting paper.”
Scholars and editors in the sciences have been trying out open peer review for some time, with not entirely rosy results. The journal Nature did a test run in 2006. In a published overview, the editors concluded the venture had not been a success. Many authors expressed interest but few participated, and the quantity and quality of the comments were disappointing.
Interviews with participants in Shakespeare Quarterly’s open peer-review trial, however, suggest this attempt went much better than Nature’s did. At least one participant pointed out that the humanities’ subjective, conversational tendencies may make them well suited to open review—better suited, perhaps, than the sciences.
Katherine Rowe, chair of the English department at Bryn Mawr College, guest-edited the special issue. She and the editorial board decided that the issue’s new-media theme offered a chance to investigate how scholarly authority works in a networked environment.
“This was genuinely an experiment,” Ms. Rowe said. “We didn’t know what would result.”
The journal’s publisher, the Johns Hopkins University Press, supported the idea. MediaCommons, a digital scholarly network set up to encourage such experiments and conversations, agreed to host the project. So Ms. Rowe and other editors put out a call for papers, culled submissions, then offered authors still in the running a chance to post drafts online. All accepted.
Rounding Up Experts
Ms. Rowe invited about 90 scholars, including Northwestern’s Mr. Mueller, to comment. Anybody willing to publish thoughts under his or her own name could join in, but the guest editor wanted recognized authorities in the field to take part.
“‘What’s the nature of expertise?’ is one of the questions that really gets opened up by an open process,” Ms. Rowe said. “Everybody wanted to be sure that experts would be involved.” By her count, about 40 commenters, invited and self-selected, finally participated.
All told, four articles and three review essays were posted on MediaCommons during a two-month review period this past spring. (The articles and comments remain archived on the MediaCommons site.) As it turned out, all seven submissions will appear in the issue.
Some of the authors acknowledged doubts going into the experiment. Would open review be rigorous enough? Was it risky to post work in progress? But “the results were terrific,” said Mr. Witmore, who wrote his paper on Shakespearean linguistic analysis with Jonathan Hope, a reader in the English department of the University of Strathclyde, in Glasgow. “It’s very different from getting a two-paragraph reader’s report from a journal,” Mr. Witmore said. “In this case, what you get is individual readers from a wide range of subspecialties zooming in on a particular paragraph, saying ‘Tell me more about this’ or ‘Why did you do this?’ It seemed more like a dialogue.”
Another scholar, Alan Galey, submitted an article about Shakespeare and the history of information. An assistant professor in the Faculty of Information at the University of Toronto, he worried that an article vetted this way might carry less professional weight—a matter of particular concern to a junior professor going for tenure. “It was very much going on faith in a way,” he said.
Mr. Galey’s dean told him to make sure the process would be rigorous and fair. The stature of the journal also helped reassure him on that point. So did Ms. Rowe’s willingness to answer questions and her decision to invite established scholars to join in. Many crowd-sourcing experiments depend on scale, Mr. Galey pointed out, but this relied “on relationships among scholars where you know you can trust somebody. It wasn’t a Wild West by any means. It was as controlled a process as traditional peer review. It was just controlled in a different way.”
Mr. Galey wound up feeling that the experiment paid off. “I got better feedback from this process than I’ve had from any other peer-review process,” he said.
Another participant, Ayanna Thompson, an associate professor of English at Arizona State University, wrote an article focused on race in performance-based teaching of Shakespeare. She called the experiment “a fascinating process.” But she also found it stressful, and saw evidence that some old ways die hard.
Better Feedback
On the positive side, she got lots of feedback—“the equivalent of eight single-spaced pages of comments,” she told The Chronicle via e-mail. She felt that the lack of anonymity encouraged reviewers to engage with the material “in a much more thoughtful and thorough way than in blind reviews because their names were attached to the comments.” That helped her clarify and revise sections of her argument. But going through a public critique “was a stressful experience, and I kept saying to myself, ‘I’m glad I have tenure,’” she said.
She also discovered that academic status still played a role. “The people who were commenting online were very established, senior scholars. In fact, several junior scholars notified me offline that they had comments, but that they did not want to post them in case they contradicted the senior scholars,” she said. “So while the process was supposed to democratize the review process, it did keep some of the old hierarchies in place.”
Still, Ms. Thompson is glad she took part. “The revised article is much stronger for the feedback, and I feel lucky to have received so much deep engagement with my work,” she said.
David Schalkwyk is editor of the journal and director of research at the Folger Shakespeare Library. He called the experiment a success and said that Shakespeare Quarterly plans to try it again for its future special issues, whose focused topics are most likely to attract a critical mass of knowledgeable reviewers.
The system did, however, come at a cost from an editor’s perspective, he said. For instance, editors and writers had to keep tabs on the evolving discussion and build in extra time for revisions; authors also tended to make their articles longer and longer in response to multiple comments. “I think we underestimated the amount of time that was required,” Mr. Schalkwyk said.
That’s what keeps him from adopting the approach for every issue. He said, “My reservation about applying open peer review to normal issues is entirely practical. It’s not philosophical. If I could do it, I would.”
For Mr. Mueller, the reviewer from Northwestern, the journal’s experiment was welcome—and past due. “I think it will take a decade for scholarly communication to find ways of fitting itself into this new technological environment,” he said. “It takes time. The human problems always take more time to manage and solve than the technological problems.”