Criticisms of peer-reviewed medical journals have sounded a dominant theme in recent years: Corporate sponsorships and inducements can lead scientists to write substandard or misleading research.
But this week in Chicago, at the latest gathering of editors of major medical journals committed to improving the peer-review process, the cast of suspects appeared to be widening.
To be sure, those attending the Seventh International Congress on Peer Review and Biomedical Publication got another earful about companies engaged in tactics like hiding unwelcome medical trials and sponsoring drug tests designed more to advertise their products than to rigorously evaluate them.
But the three days of presentations and debate also included some tough assessments of the roles played by the medical journals, the researchers, their universities, and others involved in the business of medical science.
Conference sessions highlighted journals’ not correcting known errors, researchers’ not sharing or archiving their data, and university review boards’ not guarding against sham medical trials.
“We’re all in the ecosystem, and it’s broken everywhere,” Kay Dickersin, a professor of epidemiology and director of the Center for Clinical Trials at the Johns Hopkins University, said in the conference’s centerpiece lecture.
Heal Thyself
Among the startling findings presented at the conference was a study of the aftermath of the case of Joachim Boldt. The German anesthesiologist was found guilty of scientific misconduct two years ago, leading 18 journals to promise that they would retract at least 88 papers he had published since 1999.
A check of those journals, summarized by Martin R. Tramèr, editor in chief of the European Journal of Anaesthesiology, in Switzerland, found that 10 percent of the papers still have not been retracted. And only about 6 percent were retracted completely and correctly, meaning done in ways that leave the original report both accessible and marked clearly as retracted, Dr. Tramèr told the conference.
Another study, presented by Yale University researchers, compared 96 published reports on medical trials with their corresponding entries in a government data registry. The researchers found data discrepancies between the two formats in all but one of the 96 cases.
Such damning indictments of the peer-review process—often described as the “gold standard” for ensuring quality in the scientific process—have been staples at the quadrennial conference, sponsored by the British medical journal BMJ and The Journal of the American Medical Association.
This year, however, the conference appeared to be making a more concerted attempt to identify culprits and solve problems beyond just industry, several participants said. One presentation, by Christine Laine, editor of the Annals of Internal Medicine, showed that a majority of medical researchers are unwilling to share, without conditions, the data and other materials that would let others verify their findings.
“Changing the culture is something we’re all struggling with now,” Dr. Laine said.
The journals have helped introduce some changes over the years, including the creation of trial registries, in which researchers declare their study goals ahead of time, to prevent reports that emphasize minor unexpected benefits of a treatment while ignoring more-significant shortcomings. Some have also required that data in company-sponsored medical trials be independently verified by university researchers before publication.
Initiatives suggested at this week’s conference included subjecting all articles to better statistical review, creating automated systems for extracting data from past journal articles, and centralizing relevant information discovered in legal proceedings.
There were also proposals to directly link articles to information in letters and comments, to give more attention to compiling data from case studies, and to encourage better reporting of nonpharmacological medical interventions. University review boards were urged to guard more carefully against medical trials’ being done in ways that suggest their sponsors are more interested in marketing than in answering scientific questions.
A debate on that problem suggested great difficulty in discerning nonscientific intent, with one participant suggesting universities might get help from colleagues in fields like criminology and sociology.
‘A Word From Our Sponsor’
The search for behavioral improvements beyond industry was evident from the conference’s introductory address, by John P.A. Ioannidis, a professor of health research and policy at Stanford University. Dr. Ioannidis wrote a 2005 analysis concluding that most published research findings were not accurate, and he acknowledged that he had been among a series of conference speakers in the past year to have laid blame on industry.
One of the most hopeful new trends, Dr. Ioannidis said, can be seen in companies, like Bayer and Amgen, that are taking a harder look at basic research discoveries, with an eye toward checking if they are replicable, before investing millions of dollars in trying to convert them into a pharmaceutical. Hedge funds are also increasingly distrustful of science, and some experts wonder if the failure of the war on cancer might result from low-quality basic research, he said.
“I have to admit,” Dr. Ioannidis said, “that in the last couple of years, the industry investigators have become my heroes in many regards, and this is the pleasant surprise that I have seen—the industry rising to champion replication.”
Industry representatives at the conference also appeared more comfortable, offering rebuttals in the comment periods that followed presentations in only a handful of instances.
Mary Whitman, a senior director for medical affairs at Janssen Biotech, answered the Yale study by arguing that some of the data discrepancies the study found could stem from size limits on published articles. After an analysis that suggested a low rate of researchers’ reporting financial conflicts of interest, she rejected a suggestion that medical-journal guidelines are too lax in defining conflicts.
Beyond that, said one industry participant who declined to be identified, company representatives were pleased to let the focus stay elsewhere. Explaining, for instance, why American research is especially prone to nonreplicable findings, Dr. Ioannidis blamed the intense pressure on university scientists seeking to win promotions and build careers.
That kind of pressure “guarantees that the report of a randomized clinical trial becomes an optical illusion,” said the conference’s director, Drummond Rennie, a deputy editor of JAMA and adjunct professor of medicine at the University of California at San Francisco. “It shimmers between a serious scientific report and, ‘Now a word from our sponsor.’”
Dr. Rennie also encouraged scrutiny of the presentations at his own conference. The final day, on Tuesday, opened with a brief presentation by Ana Marusic, an editor in chief of the Journal of Global Health, who tallied the research output from the six previous conferences, dating to 1989. She said that only 60 percent of past conference presentations had ended up being published in peer-reviewed journals.
Some research presented at this year’s conference may also evade deeper inspection. Despite three days of speakers’ pleas for greater openness in research, conference organizers said they had not required the speakers to make publicly available the data underlying their conclusions.