The news that a well-known study on dishonesty was based on a lie is, as ironies go, almost too perfect. The study, published in 2012, purports to show that people are more likely to tell the truth on insurance forms when they pledge to be honest before filling them out. It’s a beautifully simple finding, and one with countless practical applications. It’s also, apparently, bunk.
That revelation has put the high-profile research of Dan Ariely, a professor of psychology and behavioral economics at Duke University, under intense scrutiny. Ariely is known for cranking out clever experiments that dissect humanity’s foibles and felonies. He’s the best-selling author of books like Predictably Irrational: The Hidden Forces That Shape Our Decisions and The Honest Truth About Dishonesty: How We Lie to Everyone — Especially Ourselves. He’s an engaging speaker with a compelling personal story; his TED talks have racked up north of 20 million views. The 54-year-old is also a co-founder of several companies that make use of his research-based insights.
Ariely is a big deal in behavioral economics and beyond. Consequently, the fact that one of his famous studies turns out to be nonsense is notable. But even more concerning is that, according to a stat-by-stat examination by the cold-eyed sleuths at Data Colada, the numbers in the study in question appear to have been fabricated. And, at least so far, Ariely’s explanation for what happened has been less than satisfying. The scientist has struggled to remember what year he received the data and in what form. It has also surfaced that there were red flags early on, but that Ariely reassured co-authors that everything was fine. As a result, the bogus finding was published in a now widely cited paper, summarized in one of Ariely’s best sellers, and touted as an easy fix for the tendency to fib.
In the last decade or so, psychology in particular — though the problem extends to other disciplines as well — has been dogged by doubts about some of its flashiest findings. The blame rests primarily on an unfortunate mix of bad incentives, slipshod methods, and lack of transparency. Once upon a time, you could set up an experiment with a handful of subjects, run it a bunch of times until you got the result you wanted, and then publish it in a big-name journal. Reporters would cover it without raising an eyebrow. Literary agents would encourage you to flesh out your discoveries with anecdotes and advice. And readers would incorporate all that science-approved wisdom into their everyday lives.
Then came the replication crisis. Skeptical researchers started poking holes in many of those nifty findings. When the same experiments were attempted with a larger number of subjects, or with more rigorous controls, the gee-whiz conclusions often vanished. Now there’s much more discussion about the need to share data and not to paper over your experiment’s flaws and false starts. The replication crisis fueled the open-science movement. That doesn’t mean plenty of lousy science doesn’t still slip past peer review — just read Retraction Watch if you want evidence of that — but it does mean that studies that once would have generated applause now don’t pass muster.
What also came to light was some flat-out fraud. Perhaps the most egregious example is the work of Diederik Stapel, the Dutch researcher who faked dozens of studies, making up data for experiments that were never performed, and became a celebrity researcher in the process. Stapel’s lying caught up with him but, boy, did it take a while. When I interviewed him in 2015, several years after his comeuppance, he was chastened and reflective. “It’s about ambition, it’s about status, it’s about fitting in, it’s about trying to change the world,” Stapel told me at the time. Another prime example is Michael LaCour, whose ballyhooed same-sex marriage survey was pure fiction.
It would be unfair to assume that whatever happened with Ariely’s study is on the same level as the frauds of LaCour or Stapel, both of whom brazenly deceived colleagues and reviewers. Ariely said in a statement to Data Colada that he didn’t suspect the data had problems and that he “did not test the data for irregularities, which after this painful lesson, I will start doing regularly.” He writes that the numbers were “collected, entered, merged, and anonymized by the company and then sent to me.” In the same statement, he also said that he agreed with the Data Colada authors, who concluded that the data had been fabricated.
That would seem to shift the blame to the insurance company, which wasn’t named in the paper, but which BuzzFeed confirmed is The Hartford, based in Connecticut. In a statement, the company said that there was a “small project with Dr. Ariely from 2007-2008, however we have been unable to locate any data, deliverables, or results that may have been produced.” According to Ariely’s book, The Honest Truth About Dishonesty, he spent a day meeting with the “top folks” at the company and threw out a bunch of ideas for collaborating on a research project. The Hartford’s lawyers shot them all down.
Ariely writes that his “contact person” at the insurance company got in touch later and said the professor could tinker with forms that are sent to customers so that they can record the odometer readings on their cars. “The company gave us twenty thousand forms, and we used them to test our sign-at-the-top versus sign-at-the-bottom idea,” he writes. It’s surprising that the company would have no record of delivering this significant chunk of data to a prominent Duke researcher. No emails? No confidentiality agreements? Nothing?
Presumably Ariely knows the name of the contact person who could help fill in some of the blanks now that the study’s credibility has been challenged. The statements put out by his co-authors suggest they are frustrated over how all this has played out. Max Bazerman, a professor of business administration at Harvard, writes that he was concerned about “implausible data” in the insurance-form study way back in 2011, and asked about it at the time. He writes that the co-author “responsible for this portion of the work” — which must be Ariely — “quickly showed me the data file on a laptop; I did not nor did I have others examine the data more carefully.”
Another co-author, Nina Mazar, a professor of marketing at Boston University, writes that she had “no inkling that the insurance field data was fabricated” until she read the Data Colada post. She goes on to write that she had no contact with the insurance company and doesn’t know “when, how, or by whom exactly the data was collected and entered.” According to the file’s metadata, Ariely created the Excel spreadsheet in 2011 — years after he would have met with employees at The Hartford, assuming the company’s dates are correct. (Ariely didn’t respond to an interview request and several of his co-authors declined to be interviewed.)
Here’s what we know so far: The paper, which was published by the Proceedings of the National Academy of Sciences, is going to be retracted. A spokesman for Duke confirmed that the university’s Office of Scientific Integrity is investigating, though there’s no requirement for that office to publicly reveal its findings. Whether Ariely himself will have more to say remains to be seen.
In one way, this is yet another depressing story about a headline-grabbing study that crumbles under closer examination. At the same time, if you’re looking for a silver lining, this is also yet another story about how researchers are getting better at ferreting out and exposing such deception. It’s a testament to the importance of replication: The fraud in this case would probably never have been exposed if researchers hadn’t attempted (and failed) to replicate the finding with more subjects. And it’s further proof, as if more were needed, that scientists need to routinely share their data. It’s very unlikely that the problem would have gone undiscovered for a decade had Ariely been required to show the data to his co-authors and post it online alongside the study.
Ariely writes at the end of his book on dishonesty that all of us are “capable of cheating, and we are very adept at telling ourselves stories about why, in doing so, we are not dishonest or immoral.” Given that, science would be well served if everyone trusted a little less and verified a bit more.