Peer review is integral to a scholar’s journey, whether you’re trying to secure funding, present research at a conference, or publish your findings in a book or journal. Yet despite its ubiquity and importance, peer review has also been called the scholarly skill that “almost no one is teaching.” Emerging scholars must often learn about it through painstaking and costly trial and error, rather than via mentors or formal training.
At its best, peer review helps ensure the quality and suitability of new knowledge. At its worst, it’s an inefficient exercise that unfairly favors some while disadvantaging others. It can be a confusing, frustrating, and subjective process for all involved: Authors feel misunderstood, reviewers curse the same recurring mistakes, and editors scratch their heads when Reviewer No. 1 recommends publication as is and Reviewer No. 2 recommends rejection.
Two key factors are contributing to the sense that today’s peer-review system is in “crisis” and badly in need of reform:
- First, the proliferation of new journals and Ph.D.s in the past two decades has increased the number of editors seeking peer reviewers and the number of academics trying to get published, respectively.
- Second, fewer and fewer academics have secure employment. As the number of adjunct academics grows, the remaining faculty members with a “service” component to their workload must either do more with less or decline invitations to review, which in turn delays reviews and affects the careers of other academics.
In search of solutions, we studied a year’s worth of peer reviews from a U.S. journal in the communication field to answer three questions: What issues did peers identify during the review process? What did the reviewers’ feedback implicitly tell us about how they see their role? And how clear are reviewers and editors in giving authors a sense of which feedback is most important? Based on our findings (read the full peer-reviewed article here), we have three suggestions for current and future editors, reviewers, and authors.
First, some background. Some academics place the origins of peer review in antiquity, or in the 1600s after the invention of the printing press and the establishment of scientific journals. Others think modern peer review originated in the 1800s. Regardless of when it started, peer review has been an important mechanism for gatekeeping what counts as knowledge and who is fit to produce it.
As peer review has evolved, so, too, have the conventions that govern how to do it. Early practices relied on the knowledge and skills of individual referees rather than established criteria. Since the 1970s, explicit review criteria have become more common but by no means universal. Scholarly publishers offer how-to training guides and live webinars, but peer review remains something that most scholars learn by doing on the job.
And it remains work for which they are rarely compensated. This time-consuming and resource-intensive process relies on the generosity and goodwill of the academic community. Few journals have the resources to pay reviewers — despite academic publishers’ infamously high profit margins — and most of this form of academic labor is donated in-kind.
As a peer reviewer, your role and function are to provide a “constructive, comprehensive, evidenced, and appropriately substantial peer-review report,” and to do so in an “objective and constructive” manner. Some editors might tell you what to comment on and how to structure your review, but others will leave you entirely on your own. This raises the often-cloudy issue of what, exactly, a constructive and appropriately substantial peer review entails.
Our method. Scholarship about peer review within the humanities, arts, and social sciences tends to focus on the reviewers (often by surveying them), or on the authors and their perceptions of the timeliness and quality of reviews. We opted to study the peer reviews themselves, a far less common approach that we believed would uncover meaningful insights.
In 2022, a journal editor agreed to allow us to analyze a year’s worth of reviewer comments, as well as any additional feedback that the editor supplied in decision letters. All the materials were anonymized before we received them to protect the identity of the authors and reviewers.
We first identified the critical feedback, then noted whether the reviewers had merely flagged issues or had also provided guidance on how to resolve them. Lastly, we identified any language that signaled the relative importance of the feedback (e.g., was a particular change merely “suggested” or deemed “essential”?) or that revealed the editor’s personal assessment of which comments to ignore or privilege.
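To make that coding scheme concrete, here is a minimal sketch in Python. The field names and sample values are hypothetical illustrations of the three attributes we tracked per comment, not our actual codebook:

```python
# Hypothetical sketch of the coding scheme described above: each reviewer
# comment is tagged with the issue it raises, whether the reviewer offered
# a fix, and any importance marker. All names and data are illustrative.
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class CodedComment:
    issue_type: str            # e.g., "insufficient literature engagement"
    offers_guidance: bool      # did the reviewer suggest how to resolve it?
    importance: Optional[str]  # e.g., "suggested", "essential", or None

comments = [
    CodedComment("unsourced claims", offers_guidance=False, importance=None),
    CodedComment("paper structure", offers_guidance=True, importance="essential"),
    CodedComment("missing research questions", offers_guidance=False, importance="suggested"),
]

# Tally how often each issue appears, and what share of comments offer a fix.
print(Counter(c.issue_type for c in comments))
share_with_guidance = sum(c.offers_guidance for c in comments) / len(comments)
print(f"Share of comments with guidance: {share_with_guidance:.0%}")
```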
Our findings. Reviewers identified 34 distinct types of criticism of the articles they assessed. We don’t have the space to list them all here, but the five most common were:
- Insufficient engagement with relevant literature
- Unsourced or insufficiently supported claims
- Inappropriate paper structure
- Missing or insufficient explanation of the rationale for studying a topic
- Missing or problematic research questions
In terms of how specific or helpful the reviewers were, we found that:
- Two-thirds of the time, reviewers provided only critical feedback; just a third or so of the reviews offered suggestions on how to resolve the problems identified.
- About half of the reviews provided big-picture feedback on the manuscript. The other half focused on proofreading, line editing, and other micro copy-editing concerns. Thus, reviewers implicitly saw their role not only as evaluating the submitted research but also as helping to edit how it was presented.
- In only 20 percent of the submissions did reviewers indicate which aspects of their feedback were most important. The journal editor also didn’t play an active role in mediating the reviewers’ (sometimes-conflicting) commentary. Only when a manuscript was “desk rejected” (turned down without being sent out for review) did the editor provide brief remarks on why.
Over all, peer reviewers in our study seemed to see their role as twofold: as disciplinary experts (calling for greater engagement with the relevant literature) and as pseudo-editors (suggesting ways to improve the paper’s organization and writing, as evidenced by the high number of line edits they offered). Indeed, reviewers’ fixation on line edits calls into question how much of that type of editing should occur during peer review, as opposed to being handled by the publisher’s production team postreview.
Most reviewers also saw themselves more as critics — pointing out flaws — than as guides suggesting substantive ways to improve a manuscript. Given that scholars learn how to do peer review on the job, that means they are passing on these same practices to their own students.
3 Recommendations
Based on our findings, we offer the following suggestions to improve the process for all stakeholders.
Develop and communicate consistent peer-review criteria. Those criteria need to achieve the dual purposes of quality control and peer support. And peer reviewers need active training, in addition to the passive resources and guidance currently available.
Setting consistent criteria would require a conversation in the academy about whether the role of peer review is to identify issues only (the “critic” role) or to also propose solutions or ways forward (the “guide” role). We also need consensus about editing terminology (what, precisely, is meant by proofreading, line editing, copy editing, and developmental or “big picture” editing) and whose role it is to engage in which type of editing.
Instinct is still the way that peer-reviewed research is assessed in the journal we analyzed: No explicit criteria were provided to reviewers. Given the lack of evidence in the scholarly literature, it’s unclear how widely that practice holds across humanities, arts, and social-science journals. But from our anecdotal experience — having collectively reviewed hundreds of submissions in the sector, in multiple countries, and across several decades — we estimate that structured evaluation criteria are used only about a quarter of the time.
Presenting reviewers with a blank slate encourages arbitrary evaluations. Asking reviewers to evaluate specific aspects of a submission would help foster a systematic approach that is fairer to authors.
Reward reviewers (either financially or professionally). Provide resources or recognition to ensure quality critiques and to acknowledge the role that reviewers and editors play in shaping published research. Some third-party organizations, such as Publons, provide a way for academics to “record, verify, and showcase your peer-review contributions.”
Universities have a role to play here. They must sufficiently support and recognize this labor in hiring, promotion, and leadership decisions. Likewise, they should guide their faculty members on how much of this form of service to do (to ensure the service burden doesn’t fall disproportionately on the shoulders of underrepresented academics). Doing so will help upend the old but still prevalent notion that peer-reviewed research is the product only of the submitting author(s), rather than also of the reviewers and editors who contribute to the published work.
For their part, scholarly publishers should also invest some resources into compensating reviewers and editors for their expertise and labor.
Editors and publishers need to be clearer about what they expect in submissions. For example, does a particular journal expect empirical approaches? Is its primary aim to advance theory? Does it value research-methods innovation? Or privilege comparative approaches? Such specificity and transparency are needed because reviewers in our sample at times expressed arbitrary expectations that weren’t clearly communicated in the journal’s aims and scope (e.g., one reviewer asked the author of a study focused on Japan to contrast Japanese politics with those of the United States and Britain).
Being more transparent and specific about the publisher’s aims and expectations will help potential authors determine whether a journal is an appropriate fit. Likewise, being more explicit on this front signals to academics which qualities and skills they need to teach their graduate students so that they, too, can produce this kind of research.
The next time you’re invited to review a manuscript, actively consider your role in the process and your mindset. Think about peer review not as a battle or an argument to be won, but rather as a team effort in which the process improves the outcome. If the journal doesn’t provide explicit criteria to respond to, consider drafting your own, perhaps informed by some of the principles published by the Committee on Publication Ethics.
While the peer-review process can be difficult to navigate for new and seasoned scholars alike, submitting your work to journals with explicit criteria and aims could lead to more systematic and fair reviews and better publishing outcomes.