In recent reporting in The Chronicle, Stephanie M. Lee describes how “a famous study about a clever way to prompt honest behavior was retracted due to an ironic revelation: It relied on fraudulent data.” An author of the retracted study also wrote a book titled, appropriately, Rebel Talent: Why It Pays to Break the Rules in Work and in Life.
Examples of this particular irony are more numerous than might be expected. The disgraced primatologist Marc Hauser wrote a book originally called Evilicious: Why We Evolved a Taste for Being Bad. The psychologist Dan Ariely, who was forced to retract an article containing faked data, and who has promoted a company making fishy claims about insurance algorithms, wrote a book called The (Honest) Truth About Dishonesty: How We Lie to Everyone — Especially Ourselves. He even participated in a radio show called Everybody Lies, and That’s Not Always A Bad Thing, in which he gave this amazing-in-retrospect quote to the ever-credulous hosts at National Public Radio: “What separates honest people from not-honest people is not necessarily character, it’s opportunity. ... The surprising thing for a rational economist would be: why don’t we cheat more?”
What’s going on?
I have a few theories. The first is that these cheating researchers are both cheaters and researchers. That is, they are willing and able to break the rules and misrepresent the facts for their personal benefit, and they are researchers who are genuinely interested in the subject of cheating.
If you are a researcher in psychology or a related field, it makes sense that you might be particularly interested in phenomena that involve you personally. Fair enough: I’m interested in politics, so I study political science. These people are susceptible to dishonesty, so they study it. Perhaps the reason so many prominent perpetrators of scientific misconduct have been so brazen about it, to the point that their writings can almost be read as confessions, is simply that they’re so interested in the topic they can’t stop writing about it.
Another factor is that scientific misconduct is often rewarded. Until their eventual exposure, the producers of this controversial research were riding high. Their publication tactics had succeeded for years, so they had every reason to believe they could keep doing their thing, brushing aside any objections. Lots of people in authority don’t care, or don’t want to know. Once you’ve been doing it for a while and nobody has called you on it, you might start to feel invincible.
The other thing, and this is speculation too, is that maybe the kind of people who will cheat in this way don’t have the same moral sense as the rest of us. They think everyone cheats, and if you don’t, you’re a fool. If you’re a cheater and you regularly lie to your friends and collaborators, and you write books about how it pays to break the rules, then maybe you think that normies are saps, the academic equivalent of tourists walking around in Times Square in Bermuda shorts with wallets hanging out of their back pockets.
My impression, ultimately, is that these people just don’t understand science very well. They think their theories are true and they think the point of doing an experiment (or, in some cases, writing up an experiment that never happened) is to add support for something they already believe. Falsifying data doesn’t feel like cheating to them, because to them the whole data thing is just a technicality. On the one hand, they know that the rules say not to falsify data. On the other hand, they think that everybody does it. It’s a tangled mess, and the apparent confessions in these book titles do seem to be part of the story.
It’s certainly not a great sign that so many cheaters have attained such high positions and reaped such prestigious awards. It does make you wonder if some of the subfields that celebrate this bad work suffer from systematic problems. A lot of these papers make extreme claims that, even if not the product of fraud, ought to cause more leaders in these fields to be a bit skeptical.
Then there’s the problem of “passive corruption” — not the people who directly cheat, but those who know about cheating but don’t do anything about it. I suspect this is the result of some mixture of the following motivations: Scholars don’t want to waste time or attention on bad work; they fear the social or professional consequences of confronting cheaters; they are concerned that a general air of skepticism will spread to their own research.
How should we account for the trust extended to these researchers’ collaborators? As Lee and Nell Gluckman write, “The revelations have shaken and saddened the behavioral-science community. … And some are looking with suspicion at the dishonesty researcher they once knew and trusted, a deeply disorienting sensation.”
I have a problem with this narrative offered by the behavioral scientists — in which they were the unsuspecting victims of unexpected episodes of academic dishonesty. I, too, have been involved in collaborations where I’ve never looked at the raw data and wasn’t involved in the data collection. It really is all about trust. And anyone can get conned by someone who is willing to lie. But this particular group of the deceived were themselves students of cheating. They were collaborating with a researcher who was writing books and giving speeches on how everyone’s a cheater. So why would they, of all people, be in the habit of trusting blindly? It’s almost as if they didn’t believe their own research! As the tech people say, they weren’t eating their own dogfood.
Second, this has happened before. And many of those past cheaters enjoyed tons of institutional support. Marc Hauser finally got kicked out of Harvard, but that didn’t stop the superstar academic linguist Noam Chomsky from continuing to defend him. Brian Wansink was forced to retire from Cornell, but it took a while, and, before that happened, the cheater was defended by the tone police. When the problems with Matthew Walker’s sleep research came up, the University of California at Berkeley didn’t care.
Here’s a pungent way of thinking about it. Cheating in science is like if someone poops on the carpet when nobody’s looking. When some other people smell the poop and point out the problem, the owners of the carpet insist that nothing has happened at all and refuse to allow anyone to come and clean up the mess. Sometimes they start shouting at the people who smelled the poop and call them “terrorists” or “thugs.” Meanwhile, other scientists walk gingerly around that portion of the carpet; they smell something, but they don’t want to look at it too closely.
A lot of business and politics is like this too. But we expect this sort of thing to happen in business and politics. Science is supposed to be different.
As a statistician and political scientist, I would not claim that my fields show any moral superiority to psychology and experimental economics. It just happens to be easier to make up data in experimental behavioral science. Statistics is more about methods and theory, both of which are inherently replicable — if nobody else can do it, it’s not a method! — and political science mostly uses data that are more public, so typically harder to fake.
Anyway, here’s my point. These people were writing papers and books about cheating. They had cheaters in their midst, and they still have cheaters in their midst. And that’s not even to mention all the bad research where there’s no data fabrication or outright lying, just the production of useless, unreplicable research. It’s common knowledge in the behavioral-science community that there’s tons of crap out there which will never be retracted.
This is related to the “research-incumbency rule,” which states that, once a story is told, the burden of proof is on other people to disprove it. So, if a researcher manages to publish a ridiculous claim, there are steep barriers to challenging the claim, let alone arguing that there might be fraud. It’s not that it’s necessarily impossible to make the case that published work is wrong — indeed, the scholars Uri Simonsohn, Joe Simmons, and Leif Nelson demonstrated the problems with fake data in the dishonesty studies — but there’s a high burden of proof. You have to come in with really strong evidence, much stronger than the evidence for the original claims. Beyond this, there can be social or professional consequences of confronting cheaters or those whose research proves unreplicable.
I’m not saying that most or even many behavioral researchers are liars, cheaters, or frauds, or that they are happy with research that does not replicate. The problem is that theirs is an academic community that has consistently looked away from or downplayed lying, cheating, fraud, and weak research. For example, the first edition of Cass Sunstein and Richard Thaler’s influential book Nudge referred (unironically) to “another Wansink masterpiece.” Then, after that work was discredited, the reference to it was removed from the book’s second edition. Removing work that’s known to be fatally flawed — that’s good. But by removing any mention of it, they memory-holed their earlier cheerleading for work that turned out to be fraudulent. They haven’t exactly rewritten history, but they’ve framed things as if the problem had never existed, thus losing an opportunity to confront the error. By looking away from the problem, they have set themselves up for more problems in the future, as we all do if we politely overlook nonreplicable findings, incoherent analyses, and disappearing data.
This essay is adapted from several of the author’s blog posts.