The spread of misinformation poses a serious threat to science, public health, and democracies worldwide. In a recent essay in these pages, Jacob Shapiro and Sean Norton highlight the risks from disinformation (which they define as “the purposeful use of false information to deliberately deceive others”), but go on to argue that scholars should stop studying misinformation (“false or misleading information spread without specific intent to deceive”). They argue that the term misinformation “combines judgments about the factuality of claims with arguments about the intent behind their distribution, and inevitably leads scholars into hopelessly subjective territory.” Shapiro and Norton suggest researchers “abandon the term as a way to define their field of study.” We outline here why this position, advanced in the name of scientific rigor, is itself unscientific and, if adopted, would likely lead to societal harm.
Scholars who have studied misinformation — since well before the pandemic, which Shapiro and Norton claim popularized the term — have long distinguished between misinformation and disinformation. This distinction is well-documented and important because, where intent can be established, there are potential legal ramifications. For example, it is firmly established that the tobacco industry intentionally misled the American public for over 50 years about the health harms of smoking. In 2006, District Judge Gladys Kessler issued a landmark decision finding that the tobacco industry had violated the Racketeer Influenced and Corrupt Organizations (RICO) Act and defrauded the public by lying about the health effects of smoking and about the addictiveness of cigarettes and nicotine. The judge found that “Defendants have marketed and sold their lethal product with zeal, with deception… and without regard for the human tragedy or social costs.”
The World Health Organization estimates that 100 million people died in the 20th century because of tobacco-caused disease. That figure exceeds the number of deaths in both World Wars combined. Judges, juries, public-health officials, and historians have all concluded that an identifiable causal factor in these deaths was the deliberate spread of disinformation by the tobacco industry, whose executives understood the link between smoking and lung cancer as early as the 1950s but for decades denied it using a variety of propagandistic means in order to protect their profits.
This was disinformation, plain and simple. But misinformation is an important part of this story, too: Misinformation arose downstream of the initial propaganda, after many people — including scientists, physicians, and journalists — became confused about the harms of smoking and spread their confusion inadvertently to others. If we refuse to study the effects of this downstream misinformation, because it can sometimes be hard to pin down, and restrict ourselves only to the most flagrant deceptions, we miss the opportunity to understand the bigger picture and protect people from harm. What starts out as disinformation often becomes misinformation at the next remove. Halting the study of misinformation prematurely could lead to unmitigated harm by failing to identify, measure, and counter false information at scale.
There is broad expert consensus that misinformation can be defined to include both “false” and “misleading” information. An important part of misinformation research is to show how claims that are literally true can still be deeply misleading and harmful. Reporting that a “healthy” doctor died two weeks after receiving a Covid-19 vaccine may be factual, but it also implies the falsehood that the vaccine was responsible for his death.
The fact that there are cases in which misinformation cannot be readily and unambiguously differentiated from true information should not compel us to stop studying the problem. To the contrary, if misinformation is sometimes difficult to identify, then we need to redouble our efforts to find reliable means of detecting it. (We acknowledge and applaud Shapiro’s earlier Microsoft-sponsored project about the digital fingerprints of misinformation as part of this effort.)
Shapiro and Norton claim that research on misinformation requires adjudication of the truth value of content, which, according to them, is difficult if not impossible because of the ideological bias of researchers and because “even on a purely factual level, we’re simply too often incorrect.” Sure, researchers sometimes get things wrong. But they also get many things right, and the raison d’être of scientific research is not to be infallible but to move in the direction of truth as indicated by the evidence. All research involves expert judgment; that is just a fact of scientific life. Most important problems are hard, but this is no reason to avoid them, and the fact that the real world is often messy and ambiguous is no reason not to strive for whatever clarity we can achieve. There are circumstances in which dangerous sharks may be confused with benign Mekong giant catfish, but that does not — and should not — deter ichthyologists from differentiating between taxa.
As for ideological bias: In fact, research shows that independent fact-checks of false claims correlate highly with one another, as well as with the ratings of bipartisan crowds. Of course not every claim can be fact-checked, but that’s why misinformation interventions seek to empower people to spot the manipulation tactics that underpin misinformation in general, rather than focusing on verifying or disproving specific empirical claims.
Moreover, using heuristic categories (“misinformation,” “disinformation,” “infodemic,” and so on) is an essential part of producing testable research hypotheses, which allow for increased precision. For example, infodemiology has produced highly precise epidemiological models and mathematical formalizations of misinformation diffusion that coincide with real-world observations from social media. Research — including our own — has shown that misinformation can be coherently defined, measured in terms of discernible and diagnostic empirical features (such as emotional manipulation or use of logical fallacies), and countered using evidence-based interventions. For example, corrective ads about tobacco misinformation have been shown to increase intentions to quit smoking. Preemptively refuting misinformation — an approach known as prebunking — was used by the U.S. government to take the wind out of Putin’s planned invasion of Ukraine, and it appears to have delayed the attack. In short, harm can be prevented by identifying, measuring, and countering misinformation.
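To make that claim concrete, here is a minimal sketch (our own illustration, with hypothetical parameter values, not a model taken from any specific infodemiology paper) of the kind of SIR-style compartmental model that researchers adapt to misinformation diffusion, with “infected” reinterpreted as “actively sharing a false claim”:

```python
# A minimal sketch of an SIR-style model adapted to misinformation diffusion.
# This is an illustration with hypothetical parameters, not a model from the
# infodemiology literature. S = users who have not yet seen a false claim,
# I = users actively sharing it, R = users who have stopped sharing
# (for example, after encountering a correction).

def simulate_diffusion(beta=0.3, gamma=0.1, i0=0.001, days=120):
    """Integrate the S/I/R population fractions with a simple Euler step (dt = 1 day)."""
    s, i, r = 1.0 - i0, i0, 0.0
    history = [(s, i, r)]
    for _ in range(days):
        new_sharers = beta * s * i  # exposure: sharers reach susceptible users
        stopped = gamma * i         # attrition: sharers lose interest or see corrections
        s -= new_sharers
        i += new_sharers - stopped
        r += stopped
        history.append((s, i, r))
    return history

if __name__ == "__main__":
    trajectory = simulate_diffusion()
    peak_day, (_, peak_i, _) = max(enumerate(trajectory), key=lambda t: t[1][1])
    print(f"Sharing peaks on day {peak_day}, with {peak_i:.1%} of users sharing.")
```

The point of such formalizations is that they yield quantitative, falsifiable predictions (for instance, the timing and height of a sharing peak) that can be checked against observed social-media data.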
We applaud Shapiro and Norton’s call for greater support for standardized detection and measurement systems that will allow scientists to get a better overview of the entire information environment. But this goal can be achieved only by augmenting, rather than diminishing, the current study of misinformation, so that better detection systems can live up to their promise.
The entire history of science can be understood as the attempt to sort false claims from true ones, based on evidence. That has always involved a high degree of attention to classification of topics and categories of analysis. Scholars concerned about the adverse impacts of dis- and misinformation are simply doing what scientists have always done and need to continue to do. Misinformation is real, alas, and it needs to be studied alongside all the other real things in the world.