For those questioning whether the institution of peer review is really the ideal form of scientific quality control, the medication Tamiflu stands as a textbook example of its failure.
Back in 1999, the U.S. Food and Drug Administration reviewed data from two peer-reviewed trials and approved Tamiflu for reducing both the risk and severity of flu. The U.S. and other governments worldwide then spent billions of dollars stockpiling it.
Now, almost two decades later and in the midst of an especially brutal flu season, the wisdom of those decisions stands in considerable doubt, as does the ability of the peer-reviewed publishing process to serve as a yardstick of reliability.
The two initial Tamiflu studies from 1999 had described the pill as protecting against the flu in 74 percent of cases and as cutting the duration of flu symptoms by nearly 50 hours for those already suffering.
But a comprehensive academic analysis in 2014, covering at least 83 clinical trials of Tamiflu’s effectiveness and net benefits, many of them never previously published, found that Tamiflu shortened flu symptoms by an average of just 20 hours; that it produced no reduction in the likelihood of pneumonia, hospital admission, or complications requiring an antibiotic; and that it carried serious side effects, including nausea and vomiting. Several other studies using data not disclosed back in 1999 reached similar conclusions.
Given how selectively the data were initially made public through the peer-review process, much of the $20 billion spent worldwide on Tamiflu may have been thrown away, according to the authors of the 2014 analysis, published in the medical journal BMJ.
The Tamiflu case is “an excellent example” of the serious shortcomings of using published peer-reviewed articles as a measure of scientific reliability, says David Moher, an associate professor of epidemiology and public health at the University of Ottawa.
Peer review shouldn’t be worshiped, but judged just as strictly as any other scientific tool, says Moher, who directs the Center for Journalology at the Ottawa Hospital. “If we were to replace the peer-review intervention with a clinical intervention, such as a drug, and the evidence indicated no effect despite the enormous investment in the intervention, there would be a public outcry,” he says.
The limited number of published trials on Tamiflu at the time of its 1999 approval led some scientists to push the drug’s manufacturer, Roche, to release all of its trial data. After several years, Roche complied. The drug’s approval and sales represent a “multisystem failure,” BMJ editors said in an editorial accompanying the 2014 analysis.
Roche still defends Tamiflu’s value. A company spokesman, Bob Purcell, said in a statement to The Chronicle that Tamiflu’s regulatory approvals “were based on appropriate review of Tamiflu data.” Yet Purcell also said Roche had recognized the need to be more transparent. “We have evolved our practices regarding data sharing over time,” he said in the statement.
But others say the waste uncovered in the Tamiflu saga reflects widespread problems that continue to plague academic publishing. Moher is the lead author of a September 2017 article in Nature that analyzed nearly 2,000 biomedical articles from more than 200 low-quality journals, covering data from studies involving more than two million people and 8,000 animals. More than 90 percent of the studies in the sample failed to describe even basic procedures, such as how they randomly assigned subjects to test groups.
The waste in time and money attributable to poor peer review, Moher says, is likely to be in “the billions of hours and dollars.”
Paul Basken covers university research and its intersection with government policy. He can be found on Twitter @pbasken, or reached by email at paul.basken@chronicle.com.