Research Integrity

Wanted: Scientific Errors. Cash Reward.

By Stephanie M. Lee February 21, 2024
Illustration by The Chronicle; iStock Images

Scientific-misconduct accusations are leading to retractions of high-profile papers, forcing reckonings within fields and ending professorships, even presidencies. But there’s no telling how widespread errors are in research: As it is, they’re largely brought to light by unpaid volunteers.

A program launching this month is hoping to shake up that incentive structure. Backed by 250,000 Swiss francs, or roughly $285,000, in funding from the University of Bern, in Switzerland, it will pay reviewers to root out mistakes in influential papers, beginning with a handful in psychology. The more errors found, and the more severe they are, the more the sleuths stand to make.

The tech industry has long paid bounty hunters to unearth bugs in code, but the scientific enterprise has not had an equivalent — to its detriment, many say.

“When I build my research on top of something that’s erroneous and I don’t know about it, that’s a cost because my research is built on false assumptions,” said Malte Elson, a psychologist at the University of Bern who is leading the new program with Ruben C. Arslan, a postdoctoral researcher at the University of Leipzig, in Germany.

About 20 percent of genetics papers that contain Microsoft Excel lists of genes are thought to have errors introduced by the software, while an estimated one in four papers in general science journals have incorrect citations. Errors can be unintentional, but 2 percent of surveyed scientists admit to the more serious charges of fabricating or falsifying data. In just the last year, researchers at the Dana-Farber Cancer Institute, Harvard Medical School, Stanford University, and the University of Rochester, to name a few, have faced scrutiny over their work.

Peer reviewers for journals are primarily tasked with evaluating how original and important a finding is, not how accurate. So once a paper is out, mistakes tend to be surfaced by scientists scouring the literature on their own time — and at their own risk. The behavioral scientist Francesca Gino has filed a $25-million defamation lawsuit against a trio of professors who reported finding data fabrication in four of her papers, concerns that led to the retraction of those papers and prompted Harvard Business School to put her on unpaid administrative leave. (Gino has denied ever falsifying data.)

Over the next four years, the ERROR program — short for Estimating the Reliability and Robustness of Research — will aim to pay experts to scrutinize 100 widely cited papers that fit their technical or subject expertise. Psychology will be first up, but the organizers hope to branch out to other subjects, like economics, political science, and medicine.

Errors can take many forms, from differences between how experiments were conducted and how they were reported to discrepancies between analyses and conclusions. Some errors may be clear miscalculations, while others are more subjective and context-dependent, the organizers acknowledge, so reviewers will be allowed to decide how to look for them. They’ll also be allowed to ask the authors for help in fact-checking. Each reviewer will generate a report of any errors found, which will eventually be posted publicly.

An ERROR staffer overseeing the process, known as the “recommender,” will review the report before it is sent to the authors, who can respond. The recommender will then write a summary of the alleged concerns and suggest a course of action, which could include correcting or retracting articles with major errors.

A crucial caveat: A paper will be reviewed only if its authors agree. That’s because without full access to the underlying data, code, and other materials, there will always be questions the reviewer cannot answer, Elson said. “At this point, many people will be skeptical, and they will maybe rightfully think they can only lose if they say yes to this — all they do is put their paper at risk,” he said.

On the other hand, the prospect of a reputational boost may attract participants. “People can then point to the error report that will be public and say, ‘They checked my work and it’s fine,’” Elson said.

Research That Pays Off

Cold, hard cash is another incentive. Participating authors will get a small fee of about 250 francs, or roughly $285, plus more if no errors are found (or if they’re minor). Reviewers can earn up to the equivalent of $1,135, with bonuses depending on what they find. If their work results in a recommended retraction, they’ll net an additional $2,835.

ERROR will start with three papers, including a 2020 article that identified a strategy to discourage online sharing of Covid-19 misinformation. Gordon Pennycook, the lead author and an associate professor of psychology at Cornell University, said he was happy to have it selected. Having started his Ph.D. in 2011, Pennycook trained during an era in which a “replication crisis” unraveled some of his field’s buzziest findings and highlighted the importance of reproducible scientific practices.

“If someone’s replicating your work, they’re basically putting in their own work, adding data and information to something you obviously care about,” he said. “You should actually be excited someone’s going to replicate it.”

Not every scientist approached will be as willing and open-minded. So far, Elson said, the authors of two nominated papers have turned down invitations, and two others are undecided.

The ERROR website acknowledges that the dynamic between reviewers and authors may get “adversarial.” But it insists that the process should ultimately be “a collaborative one in the service of improving our collective scientific knowledge and fostering a culture of error checking and error acceptance.”

Elisabeth Bik, a scientific-integrity consultant who specializes in detecting manipulated images, said she welcomed what ERROR was trying to do. “There are enormous amounts of money funding novel research, and almost nothing going towards reproducibility or quality control,” said Bik, who is not involved with the program, by email.

At the same time, she can see potential problems with the setup — for instance, if ERROR reviewers were direct competitors of the researchers whose work they’re critiquing.

Elson said that if authors have a reason to believe a reviewer can’t be impartial, they can raise that concern to the organizers throughout the process. In addition, while the reviewer may find problems, the recommender decides on their severity, which is what determines the scope of the payout. “We will take the utmost care to monitor this,” he said by email.

Lawsuits are another possible concern, noted Bik, who has faced legal threats over her own sleuthing in the past. Elson said ERROR has no legal insurance, citing the complications of insuring an international project. All participants are doing this at their own risk, he said.

ERROR is, in other words, a big science experiment. Can it make error-finding less stigmatized, more standard? Will it inspire the rest of the enterprise to take a look under the hood?

“My goal,” Elson said, “is to take funding organizations, like the National Science Foundation in Switzerland, and tell them, ‘Look, if you take a tiny portion of your funding and pour this into it and do a random review of the projects you funded, that would go a long way.’”

A version of this article appeared in the March 15, 2024, issue.
Correction (Feb. 23, 2024, 12:28 p.m.): This article originally misstated the extent of errors thought to exist in some genetics papers. About 20 percent of genetics papers with Microsoft Excel lists of genes, not 20 percent of all genetics papers, are thought to have errors introduced by the software.
About the Author
Stephanie M. Lee
Stephanie M. Lee is a senior writer at The Chronicle covering research and society. Follow her on Bluesky at @stephaniemlee.bsky.social, message her on Signal at @stephaniemlee.07, or email her at stephanie.lee@chronicle.com.