News

An Attempt to Replicate Top Cancer Studies Casts Doubt on Reproducibility Itself

By Paul Basken January 18, 2017
Erkki Ruoslahti, a professor at the Sanford Burnham Prebys Medical Discovery Institute, said that by publicly casting doubt on the cancer research, the Center for Open Science may end up harming cancer patients who would otherwise benefit from the findings. (Sanford Burnham Prebys Medical Discovery Inst.)

Advocates for scientific accountability this week added an alarming new finding to their score book: Many of the most important recent discoveries in cancer research cannot be replicated.


According to organizers at the Center for Open Science, replication teams often couldn’t even repeat the purported “controls” — the baseline conditions of an animal model on which a cancer intervention was to be tested.

“That really is quite shocking,” said Tim Errington, a cell biologist at the center, a nonprofit venture of University of Virginia researchers that led the cancer-biology replication project.

The project, with some $2 million in grant support, aims to repeat the key experiments in 29 of the most-cited cancer-biology papers published from 2010 to 2012. The first five of those replication efforts were published on Wednesday, and none showed meaningful reproducibility, Mr. Errington said.

Yet the center’s latest story of irreplicable science is provoking its own doubts about the value of replication efforts. Authors of the original studies question whether the focus on replication brings with it the potential for real harm.

The five cases include a study, led by Erkki Ruoslahti, a professor and cancer researcher at the Sanford Burnham Prebys Medical Discovery Institute, that found a certain peptide — a piece of a protein — enhances the effectiveness of cancer drugs by helping them burrow more quickly into tumors.

Dr. Ruoslahti said in an interview that the attempt to replicate his team’s study, which was published in Science in 2010, had failed largely because the team trying to reproduce the findings did not have a properly functioning peptide.

“They never checked” that the peptide worked as it should before testing whether it would have the specified effect on tumors, Dr. Ruoslahti said. By publicly casting doubt on the finding — the Center for Open Science published the five replication efforts on Wednesday in the open-access journal eLife — the center may end up harming cancer patients, he said.

Dr. Ruoslahti provided a list of 12 other studies by other labs that subsequently affirmed the same basic reality he had discovered. “If we are right and they are wrong,” Dr. Ruoslahti said, anticipating a future in which his work is denigrated, “that means that patients did not get the benefit of this treatment. That’s the big risk.”

Impossible to Verify

Another of the five studies to face replication was led by Lynda Chin, a former professor of dermatology at Harvard Medical School who is now an associate vice chancellor in the University of Texas system. Dr. Chin’s team published a 2012 study in Nature that used test mice to conclude that a particular gene abnormality accelerated tumor growth.

In her case, the replication team’s mice developed tumors far faster than her team’s mice after initially being injected with cancerous cells, making it almost impossible to verify her finding about the additional effect of the suspected gene. The problem, Dr. Chin said, was probably that the replication team had failed to engineer test mice in the exact manner that her team had done. She said her team had subsequently demonstrated the cancer-accelerating effect of the genetic mutation through a separate set of tests in which the mutation was created by genetic manipulation rather than through cell injections.

“There are certain experiments where reproducibility,” she said, referring to attempts to validate a finding by exactly replicating a particular experiment, “may not be the best way to go about it.”

Mr. Errington doesn’t disagree with Dr. Ruoslahti and Dr. Chin that the reproducibility failures may reflect an inability to replicate control conditions. But, Mr. Errington said, that problem gets to the heart of what the Center for Open Science has been trying to demonstrate.

If research reports are written without enough detail to allow an outside lab to fully reproduce the circumstances of the tests, Mr. Errington said, then no outside party can have confidence in them.

Vague writing appears to be the standard practice in science, Mr. Errington said. The result, he said, is a cascade of scientific findings, each building on previous reports, without true confidence in the underlying structure.

At worst, he said, patients can be hurt rather than helped. At best, he said, the scientific enterprise grows woefully inefficient, as scientists waste time trying to figure out how a previous experiment worked, and human trials keep testing solutions that too often don’t work.

Trivial or Important

That position was endorsed by a leader in reproducibility studies, John P.A. Ioannidis, a professor in disease prevention and health research and policy at Stanford University. If Dr. Ruoslahti’s peptide is critical to enhancing tumor-fighting drugs, then its exact nature must be described. If it’s not, perhaps it’s not such a critical ingredient, Mr. Ioannidis said. “Either way there is a problem,” he said.

The researchers caught in the replication crossfire are only partly convinced. Yes, some said, they could take greater care to specify the background details of their experiments. “The truth is, there are many different ways to do it,” Dr. Chin said, referring to the creation of control conditions, “and it does matter.”

“When you discover something new, you don’t really know what might be important” in the control conditions that preceded it, Dr. Ruoslahti acknowledged. “Something that seems trivial and not worth mentioning may be important.”

But in some cases, the researchers argued, the established procedures for creating something like a functioning peptide are well known, and the challenge lies less in laboriously reciting the recipe than in having the experience and skill to execute it.

And given the unavoidable variability among individual human beings and mice, numerous studies by multiple labs affirming different versions of the same basic finding may be more valuable than high-precision replications of what one lab saw happen with one particular set of test subjects.

Mr. Errington isn’t persuaded. One immediate reason to question the significance of the 12 confirming studies cited by Dr. Ruoslahti, he said, is the well-known tendency among researchers and journals to publish only findings that show a positive result. The presence of those studies doesn’t rule out the possibility that 100 other labs tested for a similar effect, failed, and never told anybody about it.

That problem is being tackled, with limited success so far, by funders and other open-science advocates pressuring scientific journals to accept only studies for which the question being tested was publicly declared and registered in advance.

Proving Their Point

The Center for Open Science is among the leading advocates of study pre-registration. Perhaps a future reproducibility project could try to verify leading cancer studies by hunting for unregistered studies that disproved the effects that had been reported, said Brian A. Nosek, the center’s co-founder and director. “No one project can do everything,” he said.

His reproducibility studies were financed by a $2-million grant from the Laura and John Arnold Foundation. That will not be enough to finish all 29 replication projects. But Mr. Nosek said he is not worried that funding constraints might give replicators an incentive to find problems.

“We could be biased in our search for that evidence because we are human like everyone else,” Mr. Nosek said. The project’s protections against such biases, he said, include pre-registering the reproduction studies, making the entire process transparent, and conferring with experts to refine methodologies.

That provided limited consolation to Atul J. Butte, a professor of pediatrics at the University of California at San Francisco who led another of the five studies that underwent a replication effort. Dr. Butte’s team used computer mapping to identify likely new drug benefits — such as using an ulcer drug to treat lung cancer — and then tested in mice whether the theories actually worked.

The reproduction effort largely affirmed the study’s findings, Dr. Butte said. But the replicators then applied a different mathematical criterion for judging statistical significance, leading them to disagree on whether the overall project had proved its point. After a protest by Dr. Butte’s team, the final version of the replication study published in eLife was revised to accept some of the key points of the original study.

Dr. Butte declined to suggest that the reproducibility effort suffered from any motivation other than seeking the truth. But he expressed frustration with the possibility that reliable work could get tarnished. “Careers are on the line here,” he said.

Paul Basken covers university research and its intersection with government policy. He can be found on Twitter @pbasken, or reached by email at paul.basken@chronicle.com.

A version of this article appeared in the February 10, 2017, issue.