Apparently, requiring scientists to state their objectives ahead of time makes a big difference.
Around 2000, the federal government began requiring researchers conducting clinical trials with federal money to declare in advance which medical question they hoped to answer.
Before then, 57 percent of large-budget trials for cardiovascular disease attributed a positive effect to a drug or dietary supplement, according to a study published on Wednesday. After the new requirement, the success rate dropped to just 8 percent, the study found.
The difference reflects a shift away from what had been a common practice among scientists: combing through trial data after the fact for correlations between drugs and patient outcomes. Those correlations often arise purely by chance, said one of the study’s authors, Robert M. Kaplan, the chief science officer at the federal Agency for Healthcare Research and Quality.
“It was actually pretty common for people to just measure a lot of things” and then pick through the data, Mr. Kaplan said. “If you measure 20 things, one of them is going to be statistically significant by chance.”
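The arithmetic behind that quote is the familiar multiple-comparisons problem: at the conventional significance threshold of p < 0.05, each outcome on which a drug truly has no effect still has a 1-in-20 chance of looking significant. A minimal simulation sketches the point (this is illustrative only, not code or data from the study; the 20 outcomes, 100 patients per arm, and normally distributed measurements are all assumptions):

```python
# Illustrative sketch (not from the study): measure 20 outcomes on which a
# drug truly has no effect, and on average about one of them will cross the
# p < 0.05 threshold by chance alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_OUTCOMES = 20    # outcomes measured per hypothetical trial (assumed)
N_PER_ARM = 100    # patients per treatment arm (assumed)
N_TRIALS = 1_000   # simulated trials to average over

false_positives = []
for _ in range(N_TRIALS):
    significant = 0
    for _ in range(N_OUTCOMES):
        # Both arms are drawn from the same distribution: no real effect.
        treatment = rng.normal(size=N_PER_ARM)
        control = rng.normal(size=N_PER_ARM)
        if stats.ttest_ind(treatment, control).pvalue < 0.05:
            significant += 1
    false_positives.append(significant)

# Expect roughly 20 * 0.05 = 1.0 spuriously significant outcome per trial.
print(f"Mean significant outcomes per null trial: {np.mean(false_positives):.2f}")
```

Preregistration blunts this effect by forcing researchers to name the primary outcome before the data exist, so a stray correlation among the other measures cannot be promoted to a finding after the fact.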
The findings were published on Wednesday in the journal PLOS ONE by Mr. Kaplan and Veronica L. Irvin, an assistant professor of public health and human sciences at Oregon State University. They began their project in 2012, when both worked at the National Institutes of Health’s Office of Behavioral and Social Sciences Research.
The Irvin-Kaplan analysis covered 55 large medical trials, each with an annual budget of at least $500,000, financed by the NIH’s National Heart, Lung, and Blood Institute. Seventeen of the 30 trials published from 1970 to 2000 reported a positive result for an intervention (the 57 percent figure), while only two of the 25 published thereafter did (8 percent).
Researchers’ reluctance to report negative trial outcomes is a longstanding problem, one that many experts attribute in part to the pressure to produce results that journals will publish. Withholding negative results, however, deprives other medical professionals and patients of important information about safety and efficacy, the study’s authors said.
Null and negative findings need to be destigmatized, Mr. Kaplan said. He cited the Women’s Health Initiative trial, which showed that postmenopausal estrogen-replacement therapy did not help most women. Because of that negative result, millions fewer women now take those ineffective medicines, he said.
The problem with such chance findings, Mr. Kaplan said, is that they tend not to hold up in a subsequent test: the drug or other intervention that seemed to work may not actually be effective.
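A companion sketch (again illustrative, under the same assumptions as above) shows why such findings tend to vanish: take only the outcomes that reached p < 0.05 by chance in a first sample, re-run the same comparison on fresh data, and the apparent effect almost always disappears.

```python
# Illustrative sketch (not from the study): chance "discoveries" rarely
# survive a second, independent test of the same null comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
N_PER_ARM = 100      # patients per arm (assumed)
N_OUTCOMES = 10_000  # many null outcomes, for a stable estimate

def null_p_value():
    # Treatment and control come from the same distribution: no real effect.
    return stats.ttest_ind(rng.normal(size=N_PER_ARM),
                           rng.normal(size=N_PER_ARM)).pvalue

first_pass = [null_p_value() for _ in range(N_OUTCOMES)]
hits = [p for p in first_pass if p < 0.05]             # chance "discoveries"
replicated = sum(null_p_value() < 0.05 for _ in hits)  # re-test each on fresh data

print(f"First-pass significant: {len(hits)} of {N_OUTCOMES} (~5% expected)")
print(f"Replicated on fresh data: {replicated} of {len(hits)} (~5% of hits)")
```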
The preregistration of clinical trials, primarily through the government’s ClinicalTrials.gov database, is part of a continuing process in which “the way we do science has really matured” over the last couple of decades, Mr. Kaplan said.
Mr. Kaplan and Ms. Irvin chose the National Heart, Lung, and Blood Institute for their study because it was one of the earliest adopters of trial-registration requirements. They are now working on a much larger analysis, covering a variety of trials beyond heart disease over a 50-year span, to see whether the same trend holds.
Paul Basken covers university research and its intersection with government policy. He can be found on Twitter @pbasken, or reached by email at paul.basken@chronicle.com.