Research

Landmark Analysis of an Infamous Medical Study Points Out the Challenges of Research Oversight


Jon N. Jureidini, of the U. of Adelaide, in Australia, was part of the team that conducted a two-year reanalysis of a 2001 industry-supported study of a drug now known to increase the risk of suicide in teenagers. Dr. Jureidini and his colleagues on that team hope their effort will help establish a framework for carrying out other such reviews in the future.
September 17, 2015

One teenage patient taking the new antidepressant drug Paxil sliced about a dozen six-inch-long cuts into his arm over a period of several days. Another was hospitalized after threatening to kill herself.

Neither test subject participating in the clinical trial of Paxil two decades ago was classified as suicidal by the team of university researchers. Instead, in a medical-journal article summarizing their findings, the researchers, led by Martin B. Keller of Brown University, endorsed Paxil as "generally well tolerated and effective for major depression in adolescents."

In the years since then, Paxil, made by GlaxoSmithKline and generically known as paroxetine, has generated many billions of dollars in sales and become a poster child for claims of poor medical ethics. The Keller study has provoked repeated calls for retraction by the journal that published it, demands that universities punish those involved, and protests that one of its authors was just chosen to head the leading association of adolescent psychiatrists. A few years ago, it also played a key role in the nation’s largest-ever health-care-fraud settlement.

For those reasons, a small team of researchers donated hundreds if not thousands of hours over the past two years to pore through the 275 individual patient records and compile a virtual autopsy of Dr. Keller’s 2001 report, aimed at finding out exactly how and where he and more than a dozen other distinguished university authors could have gotten their conclusions so badly wrong.

The reanalysis of Dr. Keller’s data, published on Wednesday in The BMJ, a London-based medical journal, found no single answer to that question. And its verdict on the leading factor was somewhat anticlimactic: routine professional disagreements over how exactly to classify patient behaviors.

Patients who showed some form of suicidal behavior were not included in Dr. Keller’s final count, the analysis concluded, because of failures to transcribe all adverse events from one database to another and the use of "an idiosyncratic coding system."

Such breakdowns are widely seen in clinical trials. The effect, "wittingly or unwittingly," is to hide the adverse effects of medications being tested, said an author of the analysis, Jon N. Jureidini, a professor of psychiatry and pediatrics at the University of Adelaide, in Australia.

Dr. Jureidini is one of many outspoken critics of Glaxo, Paxil, and the role of Dr. Keller’s study in helping the company sell the drug in the face of its apparent dangers. "It’s worse than even we thought," Dr. Jureidini said of his findings. "It’s pretty scary, really."

A Framework for the Future

And yet Dr. Jureidini and his colleagues also emphasized their belief that the greater value of their two-year mission was not in heaping further opprobrium on Dr. Keller and his dozen or so co-authors, but in establishing a framework for carrying out such reviews in the future.

Key to building that framework, they said, would be the routine disclosure of patient-level data from clinical trials, on the scale of the disclosure Glaxo was forced to make because of legal action brought by patients.

"For us, this is the main point behind the article," said one of Dr. Jureidini’s co-authors, David Healy, a professor of psychiatry at Bangor University, in Wales. "This is why we need access to the data. It is only with collaborative efforts based on full access to the data that we can manage to get to a best possible interpretation."

The BMJ authors portrayed their work as the first major effort under a framework that Dr. Healy and another set of co-authors first proposed in 2013. They had urged institutions that finance clinical trials to help outside experts obtain and reanalyze data from trials that have been abandoned, left unpublished, or called into question.

One editorial in The BMJ accompanying the reanalysis sketched out a strategy for revisiting dormant trials and warned of the potential hurdles in policies, privacy rights, and costs.

Another editorial, by Peter Doshi, an associate editor of the journal, repeated emphatic criticisms of Glaxo, Dr. Keller and his co-authors (and their universities for failing to publicly rebuke them), and the journal that published their study back in 2001, the Journal of the American Academy of Child and Adolescent Psychiatry. Mr. Doshi also described turmoil within the academy, which recently elected one of Dr. Keller’s co-authors, Karen D. Wagner, a professor of psychiatry and behavioral sciences at the University of Texas Medical Branch at Galveston, to serve as its president, beginning in 2017.

"It is often said that science self-corrects," Mr. Doshi wrote. "But for those who have been calling for a retraction of the Keller paper for many years, the system has failed."

Neither Dr. Wagner nor representatives of the Texas medical school could be reached for comment on Wednesday.

The American Academy of Child and Adolescent Psychiatry said in a written statement that it "has the utmost respect for The BMJ" and "welcomes" the overall campaign to improve clinical trials by making data available for reanalysis. The statement, issued by the academy’s current president, Paramjit T. Joshi, chief of the Division of Psychiatry and Behavioral Sciences at the Children’s National Health System, in Washington, added that the association’s journal had full editorial independence and that opinions expressed in the journal’s articles were those of the articles’ authors.

The journal’s editor in chief, Andrés S. Martin, a professor of psychiatry at Yale University, declined to comment.

Glaxo issued a written statement saying that it had cooperated fully with the BMJ reanalysis and that the company now recognizes "there is an increased risk of suicidality in pediatric and adolescent patients given antidepressants like paroxetine."

Confronting Biases

Dr. Keller contacted The Chronicle on Wednesday to insist that the 2001 results faithfully represented the best effort of the authors at the time, and that any misrepresentation of his article to help sell Paxil was the responsibility of Glaxo.

"Nothing was ever pinned on any of us," despite various trials and investigations, he said. "And when I say that, I’m not telling you we’re like the great escape artists, that we’re Houdinis and we did something wrong and we got away with the crime of the century. Don’t you think if there was really something wrong, some university or agency or something would have pinned something on us?"

In what he described as his first effort to speak publicly about the matter, Dr. Keller said his critics also have financial and professional motives for amplifying criticisms, including lawyers representing Paxil plaintiffs and professors seeking their own records of journal publication.

Two leading advocates of increased transparency and accountability in science — neither affiliated with Dr. Keller or Dr. Jureidini — said they recognized that confronting biases would need to be a key element of any system for double-checking the output of clinical trials.

Advocates of more-reliable trial results must establish clear and consistent protocols for carrying out reanalyses and improve the incentives for those doing the work, said Brian A. Nosek, a professor of psychology at the University of Virginia and director of the Center for Open Science. "We cannot expect and rely on the volunteerism of researchers like these to improve the credibility of the research literature," Mr. Nosek said.

The type of analysis published in The BMJ is highly valuable, said John P.A. Ioannidis, a professor of health research and policy at Stanford University. And for now, Dr. Ioannidis said, that kind of work will rely on volunteers such as Dr. Jureidini. "This may also entail a risk of recruiting reanalyst volunteers who even have a bias to show that the original results are wrong," he said.

Dr. Jureidini admitted as much. "We don’t think we’ve done the definitive analysis" of Dr. Keller’s report, he said. "It’s not something that can be done absolutely objectively, particularly the interpretation of harms" to patients. "We can’t protect ourselves completely from our own biases," Dr. Jureidini said.

Paul Basken covers university research and its intersection with government policy. He can be found on Twitter @pbasken, or reached by email at paul.basken@chronicle.com.