Beaten down by technological change and economic pressures, scientific peer review is losing its long-held status as the “gold standard” measure of scholarly reliability.
The problem facing universities in 2018, however, isn’t so much that peer review has inevitably evolved as that scientists collectively have failed to respond with a better replacement.
The troubles for peer-reviewed publications are many.
One result: The notion of what it means to have a highly respected “peer reviewed” work of science has become diminished, if not lost entirely. Another: Scientists caught up in uncertainties over the meaning and standards of “peer reviewed” research aren’t doing all they can to share their work and collectively advance their fields.
The solution for scientists, say analysts studying the problem, lies in helping scholars — and their employers and funders — better understand how researchers can collaborate, share, and self-correct their work, and be credited for it.
“We have a too-narrow focus on peer review at the stage of publication,” says Brian A. Nosek, co-founder and director of the Center for Open Science, “at the cost of appreciating how evidence becomes credible over time with all the other parts of continuous peer review in the community.”
Traditionally, peer review has meant the formal evaluation process of a scientist’s manuscript — by academic counterparts of the author — as a condition for journal publication.
Now, with advanced electronic methods of communication, the concept of peer review is evolving to mean any number of ways that a scientist receives useful feedback from colleagues, from the earliest stages of project design to post-publication critiques. The nonprofit Center for Open Science alone offers at least 15 pre-print servers (online repositories for publicly sharing manuscripts with no pretense of peer review) in fields that include business, education, engineering, law, and the life sciences.
Major academic publishers, including Elsevier, are also joining in, offering a variety of online tools to help scientists record and immediately share their notes and data findings with colleagues around the world.
That’s a good thing, many experts argue. “It allows us to keep going, to be current, to be at the vanguard, and to understand what’s happening,” says Harlan M. Krumholz, a professor of medicine at Yale University who studies accuracy in science.
At the same time, however, many journals and universities cling to the idea that a final published article that passes some measure of “peer review” remains a defining measure of academic accomplishment — even in the face of growing evidence that the standards of those reviews are slipping.
At last year’s quadrennial Congress on Peer Review and Scientific Publication, Krumholz called on leading academic journals to tolerate the open sharing of findings among scientists and to stop making such activity a disqualification for the eventual publication of a manuscript. “If we wait for peer-review publication,” Krumholz said of his own research team, “we’ll be years behind in the field.”
Howard C. Bauchner, editor in chief of the Journal of the American Medical Association, pushed back, saying there had not yet been enough study of whether online sharing prior to peer-reviewed publication might produce more harm than benefit in fields like medicine. Nonscientists, for example, might see a preliminary finding and act upon it, with harmful results.
“I know it always feels better if we’re more transparent, if there’s more science, if there’s more information out there,” Bauchner, a professor of pediatrics at Boston University, told Krumholz. “But I think we’ve seen, over the last 10 or 15 years, there is the real capacity to do harm.”
Amid such fundamental disagreements, there appears to be little coordinated effort to determine what, exactly, “peer review” should look like in the future. Even among journals that make a good-faith effort at peer review, there’s no common understanding of whether the process should mean a single reader giving a quick scan for obvious errors, a team of highly qualified reviewers offering multiple rounds of feedback to the author, or something in between.
That uncertainty has helped erode collective trust in science, says Bruce V. Lewenstein, a professor of science communication at Cornell University. The solution, in the eyes of many reformers, centers on greater openness. But in the world of academic publishing, debates over “openness” have mostly concerned eliminating subscription fees, rather than opening up peer review and the broader scientific process.
Some journals are experimenting with crowdsourced peer review. Pre-print servers may be the most developed form of that idea, but other variants not yet widely adopted include the open publication of exchanges between authors and their reviewers.
Advocates of that idea include Erin K. O’Shea, president of the Howard Hughes Medical Institute, who outlined the approach at a conference the institute hosted last year. Along with publishing peer reviews — either anonymously or with attribution — O’Shea called for journals to establish systems that display “robust post-publication evaluations.” She also suggested that authors, rather than editors, decide whether their manuscript is ultimately published, “removing the notion that publication itself is a quality-defining step.”
But in the end, says Michail Kovanis, a French researcher who studies ways of improving peer review, universities themselves hold the power over the future of peer review, because they control promotions and salaries, and therefore can insist on practices that reflect quality rather than quantity.
If they fail to do that, says Kovanis, a data scientist at Inserm, the French Institute of Health and Medical Research, journals will continue to grow beyond the realistic capacity of their reviewers to meaningfully evaluate scientific work. “The ones who give money,” he says, “are the ones who can enforce.”
Paul Basken covers university research and its intersection with government policy. He can be found on Twitter @pbasken, or reached by email at paul.basken@chronicle.com.