Erkki Ruoslahti, a professor at the Sanford Burnham Prebys Medical Discovery Institute, said that by publicly casting doubt on the cancer research, the Center for Open Science may end up harming cancer patients who would otherwise benefit from the findings. (Photo: Sanford Burnham Prebys Medical Discovery Institute)
Advocates for scientific accountability this week added an alarming new finding to their score book: Many of the most important recent discoveries in cancer research cannot be replicated.
According to organizers at the Center for Open Science, replication teams often couldn’t even repeat the purported “controls” — the baseline conditions of an animal model on which a cancer intervention was to be tested.
“That really is quite shocking,” said Tim Errington, a cell biologist at the center, a nonprofit venture of University of Virginia researchers that led the cancer-biology replication project.
The project, with some $2 million in grant support, aims to repeat the key experiments in 29 of the most-cited cancer-biology papers published from 2010 to 2012. The first five of those replication efforts were published on Wednesday, and none showed meaningful reproducibility, Mr. Errington said.
Yet the center’s latest story of irreplicable science is provoking its own doubts about the value of replication efforts. Authors of the original studies question whether the focus on replication brings with it the potential for real harm.
The five cases include a study, led by Erkki Ruoslahti, a professor and cancer researcher at the Sanford Burnham Prebys Medical Discovery Institute, that found a certain peptide — a piece of a protein — enhances the effectiveness of cancer drugs by helping them burrow more quickly into tumors.
Dr. Ruoslahti said in an interview that the attempt to replicate his team’s study, which was published in Science in 2010, had failed largely because the team trying to reproduce the findings did not have a properly functioning peptide.
“They never checked” that the peptide worked as it should before testing whether it would have the specified effect on tumors, Dr. Ruoslahti said. By publicly casting doubt on the finding — the Center for Open Science published the five replication efforts on Wednesday in the open-access journal eLife — the center may end up harming cancer patients, he said.
Dr. Ruoslahti provided a list of 12 other studies by other labs that subsequently affirmed the same basic reality he had discovered. “If we are right and they are wrong,” Dr. Ruoslahti said, anticipating a future in which his work is denigrated, “that means that patients did not get the benefit of this treatment. That’s the big risk.”
Impossible to Verify
Another of the five studies to face replication was led by Lynda Chin, a former professor of dermatology at Harvard Medical School who is now an associate vice chancellor in the University of Texas system. Dr. Chin’s team published a 2012 study in Nature that used test mice to conclude that a particular gene abnormality accelerated tumor growth.
In her case, the replication team’s mice developed tumors far faster after the initial injection of cancerous cells than her team’s mice had, making it almost impossible to verify her finding about the additional effect of the suspected gene. The problem, Dr. Chin said, was probably that the replication team had failed to engineer test mice in the exact manner that her team had done. She said her team had subsequently demonstrated the cancer-accelerating effect of the genetic mutation through a separate set of tests in which the mutation was created by genetic manipulation rather than through cell injections.
“There are certain experiments where reproducibility,” she said, referring to attempts to validate a finding by exactly replicating a particular experiment, “may not be the best way to go about it.”
Mr. Errington doesn’t disagree with Dr. Ruoslahti and Dr. Chin that the reproducibility failures may reflect an inability to replicate control conditions. But, Mr. Errington said, that problem gets to the heart of what the Center for Open Science has been trying to demonstrate.
If research reports are written without enough detail to allow an outside lab to fully reproduce the circumstances of the tests, Mr. Errington said, then no outside party can have confidence in them.
Vague writing appears to be the standard practice in science, Mr. Errington said. The result, he said, is a cascade of scientific findings, each building on previous reports, without true confidence in the underlying structure.
At worst, he said, patients can be hurt rather than helped. At best, he said, the scientific enterprise grows woefully inefficient, as scientists waste time trying to figure out how a previous experiment worked, and human trials keep testing solutions that too often don’t work.
Trivial or Important
That position was endorsed by a leader in reproducibility studies, John P.A. Ioannidis, a professor of disease prevention and of health research and policy at Stanford University. If Dr. Ruoslahti’s peptide is critical to enhancing tumor-fighting drugs, then its exact nature must be described. If it’s not, perhaps it’s not such a critical ingredient, Dr. Ioannidis said. “Either way there is a problem,” he said.
The researchers caught in the replication crossfire are only partly convinced. Yes, some said, they could take greater care to specify the background details of their experiments. “The truth is, there are many different ways to do it,” Dr. Chin said, referring to the creation of control conditions, “and it does matter.”
“When you discover something new, you don’t really know what might be important” in the control conditions that preceded it, Dr. Ruoslahti acknowledged. “Something that seems trivial and not worth mentioning may be important.”
But in some cases, the researchers argued, the established procedures for creating something like a functioning peptide are well known, and the challenge lies less in laboriously reciting the recipe than in having the experience and skill to execute it.
And given the unavoidable variability among individual human beings and mice, numerous studies by multiple labs affirming different versions of the same basic finding may be more valuable than high-precision replications of what one lab saw happen with one particular set of test subjects.
Mr. Errington isn’t persuaded. One immediate reason to question the significance of the 12 confirming studies cited by Dr. Ruoslahti, he said, is the well-known tendency among researchers and journals to publish only findings that show a positive result. The presence of those studies doesn’t rule out the possibility that 100 other labs tested for a similar effect, failed, and never told anybody about it.
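To see the arithmetic behind that worry, here is a minimal sketch of the file-drawer scenario. Everything in it is assumed for illustration: a hypothetical field of 112 labs testing a treatment with no real effect, each using the conventional 5-percent significance threshold. None of the numbers come from the replication project or from Dr. Ruoslahti’s studies.

```python
import random

# A toy model of the "file drawer" problem: many labs test a treatment
# that in truth does nothing, but each has a 5-percent chance of a
# false-positive result. If only the positives are published, the
# literature looks uniformly confirmatory.

random.seed(42)

N_LABS = 112   # hypothetical number of labs testing the same effect
ALPHA = 0.05   # conventional false-positive (significance) threshold

# With no real effect, each lab "succeeds" with probability ALPHA.
positives = sum(random.random() < ALPHA for _ in range(N_LABS))

print(f"{positives} labs publish a 'confirmation'")
print(f"{N_LABS - positives} null results stay in the file drawer")
```

On these assumptions, a handful of chance confirmations reach print while the failures stay invisible, which is why a stack of published positive studies cannot by itself rule out Mr. Errington’s scenario.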
That problem is being tackled, with limited success so far, by funders and other open-science advocates pressuring scientific journals to accept only studies for which the question being tested was publicly declared and registered in advance.
Proving Their Point
The Center for Open Science is among the leading advocates of study pre-registration. Perhaps a future reproducibility project could try to verify leading cancer studies by hunting for unregistered studies that disproved the effects that had been reported, said Brian A. Nosek, the center’s co-founder and director. “No one project can do everything,” he said.
His reproducibility studies were financed by a $2-million grant from the Laura and John Arnold Foundation. That will not be enough to finish all 29 replication projects. But Mr. Nosek said he is not worried that funding constraints might give replicators an incentive to find problems.
“We could be biased in our search for that evidence because we are human like everyone else,” Mr. Nosek said. The project’s protections against such biases, he said, include pre-registering the reproduction studies, making the entire process transparent, and conferring with experts to refine methodologies.
That provided limited consolation to Atul J. Butte, a professor of pediatrics at the University of California at San Francisco who led another of the five studies that underwent a replication effort. Dr. Butte’s team used computer mapping to identify likely new drug benefits — such as using an ulcer drug to treat lung cancer — and then tested in mice whether the theories actually worked.
The reproduction effort largely affirmed the study’s findings, Dr. Butte said. But the replicators then applied a different mathematical criterion for judging statistical significance, leading them to disagree on whether the overall project had proved its point. After a protest by Dr. Butte’s team, the final version of the replication study published in eLife was revised to accept some of the key points of the original study.
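Disputes of that kind can arise even when both sides agree on the raw numbers. The sketch below is a hypothetical illustration, not the analysis from either paper: the measurements are invented, and the two decision rules (a one-sided test at a 0.05 threshold versus a two-sided test at a stricter 0.01 threshold) simply stand in for whatever criteria the two teams actually applied.

```python
from scipy import stats

# Invented measurements for illustration only; these are not data from
# Dr. Butte's study or its replication.
treated = [10.5, 9.3, 11.0, 11.5, 10.4, 10.9]  # response with the drug
control = [9.2, 10.1, 8.7, 10.5, 9.9, 9.0]     # response without it

t_stat, p_two_sided = stats.ttest_ind(treated, control)

# The observed difference favors the treated group, so the one-sided
# p-value is half the two-sided value.
p_one_sided = p_two_sided / 2

print(f"t = {t_stat:.2f}, two-sided p = {p_two_sided:.3f}, "
      f"one-sided p = {p_one_sided:.3f}")

# Same data, different criteria, opposite verdicts.
print("Team A (one-sided, threshold 0.05):",
      "significant" if p_one_sided < 0.05 else "not significant")
print("Team B (two-sided, threshold 0.01):",
      "significant" if p_two_sided < 0.01 else "not significant")
```

With these made-up numbers, the one-sided test clears a 0.05 bar while the two-sided test misses a 0.01 bar, so both verdicts are arithmetically correct; the disagreement is over which decision rule should apply, exactly the sort of question that pre-registration is meant to settle in advance.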
Dr. Butte declined to suggest that the reproducibility effort suffered from any motivation other than seeking the truth. But he expressed frustration with the possibility that reliable work could get tarnished. “Careers are on the line here,” he said.
Paul Basken covers university research and its intersection with government policy for The Chronicle of Higher Education, where he has won an annual National Press Club award for exclusives. He can be found on Twitter @pbasken, or reached by email at paul.basken@chronicle.com.