It only seems as if Brian Nosek is trying to destroy psychology.
Really, he’s trying to save it. Mr. Nosek, who is 43 and executive director of the Center for Open Science, is reckoning with deep problems not just in his discipline — he’s a professor of psychology at the University of Virginia — but in science generally. While the problems can sound arcane, with talk of p-hacking and the file-drawer effect, the upshot is simple: Too much of what gets published in respected, peer-reviewed journals cannot be replicated. And when a finding can’t be replicated, that casts doubt on its reliability. And if what gets published in scientific journals is unreliable, then science is in trouble.
Before Mr. Nosek came along, no one knew the extent of that trouble. Oh, sure, there were signs — a failed replication here, a retracted paper there — but it felt scattershot. What was needed was a close examination of the system, something rigorous and hard to dismiss. In late 2011, Mr. Nosek started the Reproducibility Project, which aimed to replicate findings in 100 studies from three top psychology journals.
He forced a reckoning with deep problems in science.
It was an enormous undertaking, bringing together more than 270 researchers. It wasn’t easy: There were administrative headaches, and the project took considerably longer than originally predicted. But somehow they pulled it off.
The results? Only 39 percent of the studies could be replicated.
Terrible news, for sure. Talk to psychologists now, after the numbers have had time to sink in, and it’s clear that many of them still don’t know what to think. The project didn’t discover that a few flawed studies had somehow slipped past the peer-review filter. Instead it showed that findings that can be replicated are the exception. Consequently, it’s hard to look at the results of an exciting new study these days without raising an eyebrow.
Mr. Nosek’s own reaction to this dire verdict has been measured. “I would have loved for the reproducibility rate to have been higher,” he said when the project’s findings were released. That’s a diplomatic way of putting it.
He’s left it for others to speculate on which subdisciplines may have to be scrapped entirely, or whether entire high-profile careers are built on statistically shaky foundations. Mr. Nosek is trying to reform science without entirely alienating fellow scientists.
And he’s not finished. In a recently published paper, Mr. Nosek and his colleagues conducted an experiment to see if it’s possible to sniff out dubious studies in advance. They gave a few dozen researchers $100 each and asked them to bet on whether certain studies in the Reproducibility Project would be successfully replicated (obviously, they placed their bets before the project’s findings were known). It turns out that they bet correctly most of the time. In other words, scientists kind of already know which studies are bunk. That should make it easier to decide which studies deserve extra scrutiny.
It’s a sign that maybe there’s some hope amid all the discouragement. Or, as Mr. Nosek put it recently: “Wow, we could really do better.”
Tom Bartlett is a senior writer who covers science and other things. Follow him on Twitter @tebartl.