It was my ritual for seven years.
Every day, take two sets of pills—one labeled, the other a mystery. Every three months, take three sets of blood-pressure readings, twice a day for a week. Once a year, collect urine for 24 straight hours, lug it everywhere in an ice pack, then get it through airport security for a flight from Washington to Boston.
For me and about 1,000 other participants in our medical trial, the payoff for all that tedium came last month: The combination of the two common types of blood-pressure drugs being tested didn’t make any significant difference in the progression of our inherited kidney disease.
That was disappointing. But it didn’t necessarily mean that the trial was a failure, a waste of the time I spent on it, or a poor use of the $40-million in taxes that paid for it. The trial’s participants got top-notch medical attention for our polycystic kidney disease, and our records will almost certainly help others with PKD, now and in the future.
For instance, kidneys in PKD patients can easily double in size as our fluid-filled cysts grow. The study has now given scientists and policy makers an unprecedented amount of data relating kidney size to disease progression. That information could be critical to proving or disproving potential new therapies for PKD.
And the trial, known as HALT-PKD, may serve an unintended purpose, too. It could highlight some of the lingering inefficiencies in our nation’s standard medical-trial structures, which are in the process of getting some badly needed updates.
Randomized clinical trials are widely recognized as the gold standard for proving whether a treatment or practice really works. In our trial, everyone took two sets of daily pills. For half of the participants, the second pill was just an inert placebo. Neither the patients nor the trial doctors knew who was really getting both medications, allowing for a rigorous test of the two-drug combination.
All of that logistical structure can mean a huge financial cost. Randomized trials now account for about 20 percent of the $30-billion annual budget of the National Institutes of Health. Private drug companies spend more than $30-billion on them.
Yet drug trials fail at a rate of about 90 percent. That level of failure has attracted serious attention now that U.S. medical research has entered a period of tighter budgets, accelerating technological advances, and extensive procedural reassessments. In that light, much about our trial’s design and execution illustrates a system of human experimentation that’s ripe for overhaul.
The patient-travel costs alone totaled enough money for the NIH to have financed several standard-size research projects. That’s because the 1,044 volunteer participants could be found only through nationwide recruitment, leading to a structure in which they were required to visit an academic medical center in one of six American cities every six months, with airfare and hotel costs reimbursed.
Given the study’s failure to meet its primary objective, it’s “a very reasonable question” to ask whether the money was well spent, admitted one of the trial’s principal investigators, Vicente E. Torres, a professor of medicine at the Mayo Clinic, in Minnesota.
Dr. Torres and other trial organizers said they firmly believe the answer is yes. The HALT-PKD trial, they explained in its immediate aftermath, needed hundreds of participants because any possible benefits of the two-drug combination were expected to be small in a disease that advances so slowly. (Patients often reach their 50s or 60s before experiencing kidney failure due to the enlarged cysts.)
On its face, a small gain might not seem worth a large expense. But even a small delay in the progression of PKD would have been well worth it, Dr. Torres said. The annual cost of treating the disease, he pointed out, exceeds $2-billion in the United States alone.
And aside from testing the two-drug combination, HALT-PKD produced years of patient data showing how kidney size relates to eventual kidney failure. In particular, it provided clear evidence that lowering blood pressure, with one or both of the drugs being tested, significantly slowed the rate at which kidneys were harmed by ballooning cyst growth.
That kind of information has obvious, immediate value to patients and doctors. But it’s also helpful to future PKD treatments because the U.S. Food and Drug Administration typically approves a drug only when it can be shown to prevent a clear endpoint, such as irreversible kidney failure. For slow-developing diseases such as PKD, reaching that failure point takes too long for many trials to observe. The HALT-PKD data could help cement the link between kidney growth and eventual kidney failure, allowing kidney size to serve as an earlier marker of progression and giving future drugs a fairer chance of showing whether they deserve regulatory approval.
Still, there are somewhat-necessary inefficiencies (hundreds of people making regular flights to distant cities, for instance), and then there are less-necessary ones (failing to collect, once those people have been assembled, potentially useful data about their health). “There’s a huge wastage in the field” of large-scale human clinical trials, said one of the many experts struggling with the problem, Garret A. FitzGerald, a professor of medicine and pharmacology at the University of Pennsylvania.
Among the problems he and others cite: Medical students often aren’t trained in how best to design a clinical trial. There’s not enough permanent and coordinated infrastructure to conduct multiple trials. Systems for sharing relevant data, both inside and outside the medical community, are woefully insufficient. And both academic and industry researchers face pressures to test drugs rather than nonpharmaceutical interventions.
One of the most important questions in any trial, of course, is what medical intervention to test.
My trial, like many, was heavily shaped by testing on animals. Jared J. Grantham, an emeritus professor of nephrology and hypertension at the University of Kansas who led the creation of the PKD Foundation 30 years ago, said there were many studies prior to the trial—typically involving mice—that gave scientists hope that PKD might be slowed by the combination of two drugs. Those drugs, lisinopril and telmisartan, counter angiotensin, a hormone that raises blood pressure by constricting blood vessels, through different mechanisms: one blocks the enzyme that produces the hormone, the other blocks its receptors.
But for many diseases, mice and other animal models have proved notoriously unreliable in predicting drugs’ effects on human beings. “From the point of view of PKD specifically, we have a number of hypotheses that have come out of the basic-science laboratories, and to a large extent the animal models often don’t exactly mimic what’s going on in people,” said Joseph V. Bonventre, a professor of medicine at Harvard University and chief of the renal unit at Brigham and Women’s Hospital.
More generally, many observers suspect researchers of becoming too enamored of their animal models.
“We’re extracting some cartoon version of the disease, and then treating it, so that the animal model becomes the focus of our research, not the actual human disease,” said Susan M. Fitzpatrick, president of the James S. McDonnell Foundation and an adjunct associate professor of neurobiology at Washington University in St. Louis. “And we learn more and more about the model, but not the disease.”
Overreliance on animal models isn’t the only contributor to waste in medical trials. Another key problem highlighted by the HALT-PKD trial is the bias against nonpharmaceutical interventions often exhibited by both medical professionals and their patients.
Even the HALT-PKD patient-health surveys—dozens of questions posed to participants every three months—did not ask us about some basic measures of physical well-being or suspected factors in PKD progression. There were no questions about exercise and dietary habits, for example, or about water and caffeine consumption.
HALT-PKD organizers said they worried about overburdening trial participants and, as a result, getting unreliable responses.
But those questions could be important because heavy water consumption is widely understood to help slow PKD, and caffeine is at least suspected of stimulating cyst growth. Yet the few clinical trials of those theories have included only a dozen or so patients at most.
A bigger trial—with more than 1,400 patients, concluding in 2012—showed that the drug tolvaptan had some benefit against PKD. Tolvaptan blocks the action of vasopressin, a hormone that causes the body to retain water. One key effect of taking the drug is that it forces its users to drink more water. “It makes you pee like a racehorse,” said Benjamin D. Cowley Jr., professor and chief of nephrology at the University of Oklahoma. Without tolvaptan, many PKD patients may not drink enough to get the expected benefit of plentiful water consumption, he said.
Why, then, do researchers seem to prefer costly and complicated drugs and tests? For one thing, simpler everyday interventions just aren’t professionally exciting, said Dana P. Goldman, a professor of health policy and economics at the University of Southern California. “It’s not that sexy to study drinking water,” he said.
On the corporate side of medical trials, there’s another pressure point, said Dr. Grantham. “It’s hard to get funding because you can’t patent water—it’s free,” he said.
More broadly, there may be a general bias toward bringing theories to trial. In the private sector, companies now spend about 60 percent of their entire drug-development budgets on randomized human trials, Dr. FitzGerald said. The trials’ high failure rate is driven in part by companies’ pushing for trials before they’ve taken the time to test their theories more thoroughly in the lab, he said. That may seem shortsighted, Dr. FitzGerald said, but executives sometimes appear motivated more by the short-term career boost from a trial announcement than by the fear of a negative result many years down the line.
Academics also face career incentives that lead many of them to push for trials, whether needed or not, said Robert M. Califf, vice chancellor for clinical and translational research and a professor of cardiology at Duke University. It’s a simple matter of journal publications and promotion policies that reward leadership positions, he said. “You’re not going to get promoted to associate professor by being the 299th investigator in somebody else’s clinical trial,” Dr. Califf said. “But that’s what society needs.”
What might a more-efficient trial system look like? One collaboration in Chicago offers a possible way forward.
Working together, several of the city’s academic medical centers have established a joint network for conducting clinical trials. Participating institutions now routinely interview all of their hospitalized patients, regardless of diagnosis, to keep detailed records on their health status. With permission, those records are made available to researchers.
Over 15 years, the process has enrolled 100,000 patients, many of whom are then recruited for clinical trials, said David O. Meltzer, a professor of medicine and director of the Center for Health and the Social Sciences at the University of Chicago. Much of the data is collected by undergraduates, and the team has grown large enough that newcomers can be trained without the need to constantly rebuild for each new trial, Dr. Meltzer said. “It’s wildly cost-effective,” he said, “and it’s incredibly good for the students.”
Even more savings could be realized by reconsidering whether trial participants are needed at all. A dozen years ago, Benjamin A. Olken, a professor of economics at the Massachusetts Institute of Technology, wanted to study corruption in Indonesia, to learn which of two strategies—threatening audits of government officials or giving community members a more direct role in monitoring—would do a better job of keeping road builders from “cheating.”
Rather than track down people for extensive behavioral surveys, he attacked the problem by simply drilling test holes in highways to learn where the promised high-quality road materials had actually been used.
“That was a very concrete way of measuring” which of the two anticorruption tactics worked best, said Mary Ann Bates, deputy director of the Abdul Latif Jameel Poverty Action Lab at MIT, which encourages low-cost methods of testing scientific theories.
Another example she cites: The insurance company Aetna provided patient data to a Harvard study that asked whether eliminating prescription copayments would save money among patients who had suffered a previous heart attack. (It did.) That approach proved much easier and more reliable than trying to ask the patients themselves, Ms. Bates said. Similarly, she said, a University of Chicago study used local police and court records to prove the effectiveness of a program that taught young men to de-escalate conflicts.
“The cost of a study that uses administrative data like that can be more on the order of magnitude of hundreds of thousands of dollars, not millions,” Ms. Bates said. “And that really changes the calculation” of whether a trial is worth conducting, she said.
Those kinds of clever, paradigm-shifting alternatives may not have been possible a decade ago, when the HALT-PKD trial was designed. But the nation’s move toward electronic medical records could let researchers quickly see the effects of different drug combinations, behavioral practices, and genetic variations in patients.
Legal changes would almost certainly be necessary to make the fullest use of such databases. For instance, public and private owners of the data may need more-established structures and rules to allow wider sharing of useful information. When patients are wanted for a trial, they sometimes need better assurances against fears—real or imagined—that unexpected medical findings could affect their insurance status. And under current law, standard-format clinical trials remain a vital element of the FDA’s drug-approval process.
But reducing trial costs definitely does not mean giving up on trials, said Peter R. Orszag, a former White House budget director in the Obama administration. On the contrary, Mr. Orszag argued, far too many commonplace medical practices and procedures lack grounding in a scientifically rigorous test of their effectiveness. It’s therefore critical to make trial practices more efficient, so that the many questions needing answers in medicine can be thoroughly vetted, said Mr. Orszag, now vice chairman of corporate and investment banking at Citigroup.
The NIH, the nation’s leading provider of medical-research money to universities, is taking steps in that direction, said Kathy L. Hudson, the agency’s deputy director for science, outreach, and policy. Its newer approaches include “adaptive clinical trials,” in which trial protocols are modified as patient results are seen, and multi-university collaborations, in which a pool of shared expertise is made available for multiple trials, Ms. Hudson said.
Change is happening, Ms. Bates said. But “it could happen even more.” After watching for seven years as 1,000 patients ran through the HALT-PKD study, she said, researchers should not “be asking when we should do another $40-million trial, but when could one do that at a fraction of the cost.”
For me, the trial was a great experience, regardless of its cost or efficiency. Even though it failed in its primary objective, HALT-PKD probably provided significant benefit to PKD patients now and in the future. It gave me insight into how the clinical-trial system works—along with firsthand exposure to some really top-rate medical professionals.
But I’m just a tiny piece of the puzzle, as are the 1,000 other patients who took part in HALT-PKD. Worldwide, 12.5 million people have polycystic kidney disease. As with many conditions, the amount of money spent studying it falls well below the demonstrated cost of coping with it.
Finding a cure would be a godsend, and if that happens, it’s likely to happen because of clinical trials. Which is all the more reason to find ways to make those trials more efficient.
Paul Basken covers university research and its intersection with government policy. He can be found on Twitter @pbasken, or reached by email at paul.basken@chronicle.com.