I fear writing this essay. I fear it because I’m sure that some journal editor out there is going to see my name and think, “Oh yeah, Dan Myers. We haven’t asked him for a review in a while. He’d be perfect for that paper on X.” Please, no, I beg you, a thousand times, no.
In the past month, I have been asked to review not one (which would be reasonable), or two (understandable), or three (excessive), or even four, five, six, or seven manuscripts for publication, but a total of eight! While I confess to no small amount of pride in becoming what must be my discipline’s pre-eminent arbiter of scholarly quality and its gatekeeper supreme, I really must object. It’s getting impossible to produce any of my own work because I’m spending so much time assessing others’. And so far I’m only tallying journal manuscripts. On top of that, I have tenure and promotion cases, grant proposals, book manuscripts and prospectuses, and the everyday work of reading student papers and dissertation drafts (tasks for which I’m actually drawing salary).
Is this rate of review requests really necessary? Well, let’s take a look at my discipline, sociology. The American Sociological Association claims to have some 14,000 members. Let’s suppose that my past month’s review rate is the accepted standard. Furthermore, imagine that only 75 percent (10,500) of ASA members are deemed acceptable reviewers. With those numbers, the association membership could generate more than one million reviews per year! Even if we cut the review rate to four per month, we’d still be able to produce 500,000 reviews per year.
Let’s take that thought experiment one step further. Suppose a typical manuscript could claw its way through two rounds of reviews in a year (pretty speedy by today’s standards), receiving three reviews on each round. Those 500,000 reviews could therefore handle almost 84,000 manuscripts each year, or six papers for every member of the ASA. Prolific as we are, sociologists don’t produce an average of six papers a year, nor do we need a half-million journal-manuscript reviews to conduct our business each year.
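For readers who want to check the back-of-the-envelope figures above, here is the same arithmetic in a few lines of Python. The inputs are the essay’s own numbers (14,000 ASA members, 75 percent deemed acceptable reviewers, four reviews per month, two rounds of three reviews per manuscript); the variable names are mine.

```python
# Back-of-envelope check of the essay's reviewing arithmetic.
members = 14_000            # claimed ASA membership
reviewer_fraction = 0.75    # share deemed acceptable reviewers
reviews_per_month = 4       # the "reduced" review rate

reviewers = int(members * reviewer_fraction)          # 10,500 reviewers
annual_capacity = reviewers * reviews_per_month * 12  # 504,000 reviews/year

reviews_per_manuscript = 2 * 3  # two rounds, three reviews per round
manuscripts = annual_capacity // reviews_per_manuscript  # manuscripts handled
papers_per_member = manuscripts / members                # papers per ASA member

print(reviewers, annual_capacity, manuscripts, papers_per_member)
# → 10500 504000 84000 6.0
```

The unrounded capacity (504,000 reviews) handles exactly 84,000 manuscripts, which is where the "almost 84,000" figure comes from once the capacity is rounded down to 500,000.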
Now if I were an isolated case, you could simply dismiss me as a crank and suggest that I learn to say no. Or if sociology were unique in its reviewing practices, you could just tell us to get our house in order. But I know from talking to colleagues in my own department, in other departments, and in other disciplines that I’m not all that isolated. I have many comrades (not “in arms” yet, but it is coming) who are experiencing an unbearable overload of review duties. And that is not the only problem with the review system.
Editors complain about frequent refusals from potential referees, low quality and brevity of reviews, lack of engagement with the papers’ arguments and evidence, and the ever-increasing time it takes referees to produce their reports. Authors, especially graduate students and pretenure faculty members, also worry about the increased length of the review process and consider compromising on where their manuscript is published in hopes of getting another line on their CV before hitting the job market or submitting their tenure packets.
What are the sources of these problems? First, some journal editors are asking for too many reviews of each paper. Is it really necessary to have three, four, or five reviews to make a decision? Second, journal editors are far too reluctant to “reject without review.” Many seem to reason that if a paper is submitted, it deserves to be reviewed. I disagree. By agreeing to review papers that have no chance of being published in the journal, editors are hobbling their journal’s ability to give feedback where it really counts. Journals are not social-service agencies required to provide feedback to every poor soul with a half-baked idea. There are many ways to get feedback on one’s work without submitting it to the premier journal in one’s field. Every review wasted on an unworthy paper means fewer available for the papers that really need careful attention. Likewise, editors may be giving too many “revise and resubmit” decisions. It’s nice to give authors a second chance, but the way most review processes unfold, issuing the R&R doubles the amount of review effort necessary for that paper. The paper ought to have more than just a chance—it ought to have an awfully good chance if we’re going to double the amount of work that other people are putting into it.
Editors aren’t the only ones creating the problem, of course. Authors too often submit papers to journals that are beyond their reach. Then, after the papers are rejected, the authors blindly submit them to other journals, having paid little or no attention to the critiques generated in the first submission. Reviewers write unengaged, useless reviews, requiring editors to get more reviews before making a decision. That produces an overload on other reviewers, who skim papers and write hasty reviews, or take forever to get to their eighth review request of the month.
The most likely source of the review-overload problem is that the reviewer pool has become too constricted. Editors are relying too much on the same set of reviewers. I’m guessing that many of the other 10,499 potential reviewers in the ASA are never asked to pitch in. Another segment simply refuses to review; burned out, tired of wasting time, or just plain selfish, they withhold their critical contribution to the discipline.
I could go on. Graduate students must be trained and socialized to become good reviewers. Reviewers must learn and accept the role of general reader. And we must reconsider the role of new reviewers when evaluating a resubmission. But I’d rather leave those discussions to the faculty committee that someone has undoubtedly appointed to study this issue.
At this point, we don’t need more analysis and discussion. We need action. All of the above issues are contributing to an overload of reviews, and we aren’t dealing with them. It is time for our disciplinary associations, such as the ASA, to tackle the problem. And by tackle, I don’t mean they should issue long reports with a bunch of recommendations that no one will follow. Instead they should lead by example. For the journals they control, they should impose standards that editors and reviewers must follow. These should include: (1) a proportion of papers that must be rejected without review, (2) a limit on the number of reviews solicited for each paper, (3) a substantial reduction in the percentage of authors invited to resubmit, and (4) a requirement that authors who have published in, or submitted to, the journal must review manuscripts.
Draconian measures, you say? Perhaps. But maybe this is a Draco we should embrace. If not, we are going to take an ailing peer-review system and kill it outright.