Don’t always believe what scientists tell you. Be skeptical.
That’s what I tell students in my history-of-science and science-writing classes. But some of them may have taken the lesson too much to heart.
I want to give my students the benefit of my hard-won knowledge of science’s fallibility. Early in my career, I was a conventional gee-whiz science writer, easily impressed by scientists’ claims. Fields such as physics, neuroscience, genetics, and artificial intelligence seemed to be bearing us toward a future in which bionic superhumans would zoom around the cosmos in warp-drive spaceships. Science was an “endless frontier,” as the engineer Vannevar Bush put it in his famous 1945 report that paved the way for the creation of the National Science Foundation.
Doubt gradually undermined my faith. I came to believe that I and other journalists were presenting the public with an overly optimistic picture of science. By relentlessly touting scientific “advances”—from theories of cosmic creation and the origin of life to the latest treatments for depression and cancer—and by overlooking all the areas in which scientists were spinning their wheels, journalists made science seem more potent and fast-moving than it really is.
I urged my students to doubt the claims of physicists such as Stephen Hawking that they are on the verge of explaining the origin and structure of the cosmos. The theory that Hawking favors in his most recent bestseller, The Grand Design (Bantam Books, 2010), postulates the existence of particles shaped like strings or membranes, as well as other universes. But the hypothetical particles are too small and the other universes too distant to be detected by any conceivable experiment. This isn’t physics any more, I declared in class. It’s science fiction with mathematics!
I gave the same treatment to the quest for a theory of consciousness, which would explain how a three-pound lump of tissue—the brain—generates perceptions, thoughts, memories, emotions, and self-awareness. Artificial-intelligence authorities such as Ray Kurzweil assert that scientists will soon reverse-engineer the brain so thoroughly that they will be able to build artificial brains much more powerful than our own.
Balderdash, I told my classes (or words to that effect). Scientists have proposed countless theories about how the brain absorbs, stores, and processes information, but this plethora of explanations indicates that researchers really have no idea how the brain works. And artificial-intelligence advocates have been promising for decades that robots will soon be as smart as HAL or R2-D2. Why should we believe them now?
Maybe, just maybe, I suggested, fields such as particle physics, cosmology, and neuroscience are bumping up against insurmountable limits. The big discoveries that can be made have been made. Who decreed that science has to solve every problem?
Lest my students conclude that I’m some solitary crank—God forbid—I assigned them articles by other scientific skeptics. One, by the tough-guy science journalist Gary Taubes, was an 8,000-word dissection of epidemiology and clinical trials. Taubes pointed out that even large, well-designed investigations—notably the Nurses’ Health Study, in which Harvard researchers have tracked 120,000 women since the 1970s—often produce findings overturned by subsequent research.
Such studies have supported and then undermined claims about the medical benefits of estrogen therapy, fruits and vegetables, fish oil, and folic acid. Taubes advised people to doubt dramatic claims about the benefits of any drug or diet, especially when the claim is being reported for the first time. “Assume that the first report of an association is incorrect or meaningless,” he wrote, because it probably is. “So be skeptical.”
To further drive this point home, I assigned articles by John Ioannidis, an epidemiologist who has exposed the flimsiness of most peer-reviewed research. In a 2005 journal article, Ioannidis examined 49 widely cited medical papers and found that for one-third, the results were shown by subsequent work to be wrong or exaggerated. After analyzing the track record of other fields, he concluded that “most published research findings are false.”
Ioannidis blames scientists’ lousy track record in part on what economists call “the winner’s curse.” The phenomenon occurs when bidders in an auction—whether art dealers lusting after a Picasso or oil companies pursuing drilling rights in Alaska—drive up the price of a commodity past its true value. The auction’s “winner” ends up being a loser. In the same way, scientific journals, which compete with one another to publish the most dramatic papers, often overestimate the trustworthiness of those studies.
As a result, Ioannidis and his colleagues contend, “the more extreme, spectacular results (the largest treatment effects, the strongest associations, or the most unusually novel and exciting biological stories) may be preferentially published.” These sorts of claims are also more likely to be wrong. Of course, scientists, who are competing for fame, glory, and publications, exacerbate the problem by hyping their results.
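The statistical logic behind that selection effect is easy to see in a toy simulation. The sketch below is my own illustration, with made-up numbers rather than anything taken from Ioannidis’s papers: many labs measure the same modest effect with noise, and a journal that publishes only the most spectacular estimate ends up, on average, reporting something far larger than the truth.

```python
import random

random.seed(0)

TRUE_EFFECT = 0.2   # the real, modest effect every lab is trying to measure
NOISE = 0.5         # sampling noise in each lab's estimate
LABS = 50           # independent studies competing in each "round"
ROUNDS = 10_000

published = []   # the most dramatic estimate from each round
unselected = []  # one arbitrary, unselected estimate for comparison

for _ in range(ROUNDS):
    estimates = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(LABS)]
    published.append(max(estimates))  # the journal picks the most spectacular result
    unselected.append(estimates[0])   # a study chosen without regard to its outcome

print(f"true effect:               {TRUE_EFFECT:.2f}")
print(f"average unselected study:  {sum(unselected) / ROUNDS:.2f}")  # close to 0.20
print(f"average 'published' study: {sum(published) / ROUNDS:.2f}")   # badly inflated
```

The “winning” estimate overshoots not because anyone cheated but simply because it was selected for being extreme, which is the heart of the winner’s curse.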
To top off this ice-cream sundae of doubt, I offered my students a cherry: a critique by the psychologist Philip Tetlock of expertise in soft sciences, such as politics, history, and economics. In his 2005 book, Expert Political Judgment (Princeton University Press), Tetlock presented the results of his 20-year study of the ability of 284 “experts”—and I use the quotes advisedly—in politics and economics to make predictions about current affairs. The experts did worse than random guessing, or “dart-throwing monkeys,” as Tetlock put it.
Like Ioannidis, Tetlock found a correlation between the prominence of experts and their fallibility. The more wrong the experts were, the more visible they were in the media. The reason, he conjectured, is that experts who make dramatic claims are more likely to get air time on CNN or column inches in The Washington Post, even though they are likelier to be wrong. The winner’s curse strikes again.
For comic relief, I told my students about a maze study, cited by Tetlock, that pitted rats against Yale undergraduates. Sixty percent of the time, researchers placed food on the left side of a fork in the maze; the rest of the time it went to the right, in no predictable pattern. After figuring out that the food was more often on the left side of the fork, the rats turned left every time and so were right 60 percent of the time. Yale students, discerning illusory patterns of left-right placement, guessed right only 52 percent of the time. Yes, the rats beat the Yalies! The counterintuitive lesson, I suggested, is that the smarter you are, the more likely you may be to “discover” patterns in the world that aren’t actually there.
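The arithmetic behind those two numbers is worth spelling out. Here is a minimal sketch, on the standard assumption that the students were unconsciously matching the 60/40 placement odds rather than sticking to a single rule; the calculation is my illustration, not something taken from Tetlock’s book.

```python
P_LEFT = 0.6  # the food is on the left side of the fork 60 percent of the time

# The rats' strategy: always turn left.
# They are correct exactly as often as the food is on the left.
always_left = P_LEFT  # 0.60

# The students' (assumed) strategy: "probability matching" -- guess left on
# 60 percent of trials and right on 40 percent, independently of the food.
# A guess is correct when it happens to agree with the food's location.
probability_matching = P_LEFT * P_LEFT + (1 - P_LEFT) * (1 - P_LEFT)  # 0.36 + 0.16 = 0.52

print(f"always turn left:     {always_left:.0%}")           # 60%
print(f"probability matching: {probability_matching:.0%}")  # 52%
```

Hunting for a pattern costs the students eight percentage points relative to the rats’ simple rule.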
My goal was to foster skepticism in my students, and I succeeded—too much so in some cases. Early on, some reacted with healthy pushback, especially to my suggestion that the era of really big scientific discoveries might be over. “On a scale from toddler knowledge to ultimate enlightenment, man’s understanding of the universe could be anywhere,” wrote a student named Matt. “How can a person say with certainty that everything is known or close to being known if it is incomparable to anything?”
But as the semester unfolded, many students’ skepticism intensified, and manifested itself in ways that dismayed me. Cecelia, a biomedical-engineering major, wrote: “I am skeptical of the methods used to collect data on climate change, the analysis of this data, and the predictions made based on this data.” My lectures and assignments apparently were encouraging Cecelia and others to doubt human-induced global warming, even though I had assured them it has overwhelming empirical support.
Steve, a physics major, was so inspired by the notion that correlation does not equal causation—a major theme of the Taubes article on epidemiology—that he questioned the foundations of scientific reasoning. “How do we know there is a cause for anything?” Steve asked. He quoted “a famous philosopher, Hume, who believed that there is no cause of anything, but that everything in life is just a correlation.”
In a similar vein, some students echoed the claim of radical postmodernists that we can never really know anything for certain; that almost all our current theories will probably be overturned. Aristotle’s physics gave way to Newton’s, which in turn yielded to Einstein’s. Our current theories of physics will surely be replaced by radically different ones, won’t they? Who knows! Maybe even heliocentrism, which was established by astronomy pioneers like Copernicus and Kepler, will be shown to be wrong.
After an especially doubt-riddled crop of papers, I responded, “Whoa!” (or words to that effect). Science, I lectured sternly, has established many facts about reality beyond a reasonable doubt, embodied by quantum mechanics, general relativity, the theory of evolution, and the genetic code. This scientific knowledge has yielded applications—from vaccines to computer chips—that have transformed our world in countless ways. It is precisely because science is such a powerful mode of knowledge, I said, that you must treat new pronouncements skeptically, carefully distinguishing the genuine from the spurious. But you shouldn’t be so skeptical that you deny the possibility of achieving any knowledge at all.
My students listened politely, but I could see the doubt in their eyes.
We professors have a duty to teach our students to be skeptical. But we also have to accept that, if we do our jobs well, their skepticism may turn on us.