Vladimir Putin is waging yet another influence campaign. New intelligence shows that Russian actors are using fake media websites and social-media accounts, some enhanced by artificial intelligence, to shape the November election by targeting voters in swing states. And Russia is not alone: China and Iran are also spreading falsehoods online to sway votes.
While the public is rightly worried about disinformation, disturbing examples of misinformation also abound — including conspiracy theories on both sides of the aisle. Though disinformation and misinformation are often conflated, they are analytically different, and the difference matters. Disinformation refers to the purposeful use of false information to deliberately deceive others. Misinformation is false or misleading information spread without specific intent to deceive. Those spreading misinformation may mistakenly believe it to be the truth, or simply may not care one way or the other. Early in the COVID-19 pandemic, for example, well-meaning people spread incorrect information, including conspiracy theories that contributed to vaccine hesitancy, with potentially dire public-health consequences.
Given these dangers, policymakers and researchers have taken to the floor of Congress and the pages of academic journals, calling on technology companies to increase transparency and allow for a better response to online misinformation. In January 2024, the World Economic Forum’s survey of almost 1,500 global experts identified misinformation and disinformation as the greatest global risk over the next two years.
While the term “misinformation” may seem simple and self-evident at first glance, it is in fact highly ambiguous as an object of scientific study. It combines judgments about the factuality of claims with arguments about the intent behind their distribution, and inevitably leads scholars into hopelessly subjective territory. As a result, the pursuit of misinformation as a research topic ultimately makes it harder to answer critical questions about how changes to the information environment impact outcomes in the real world. It would be better, we think, for researchers to abandon the term as a way to define their field of study and focus instead on specific processes in the information environment.
When scholars talk about “misinformation,” they are talking about activity generated by three very different social phenomena, each of which interacts with technology in distinct ways and creates a different kind of evidence trail.
The first is state-sponsored influence campaigns: countries reaching across borders to influence other countries’ politics with a combination of propaganda, true information presented out of context, and outright lies. This is nothing new. “Black propaganda,” which Russia has employed with remarkable creativity for decades, is just one of a variety of information operations that have long been part of statecraft.
The second process is when bad actors, whether state-aligned or not, use conspiracy theories or false information to make political arguments in their own countries. The literature on this process has proliferated in recent years, and with good reason. Lies spread online have led to physical assaults on the peaceful transition of power in two consolidated democracies — the United States and Brazil — and have been used to gain power in Gabon, India, Slovakia, the United Kingdom, and many other countries. Technology has turbocharged this process, allowing rumors to spread far more quickly than before. Here again, though, the actors involved almost always use a mix of true and false information.
The third process is monetized falsehoods, often concerning medical information and contentious political topics. In recent years, technology has created innumerable new platforms outside traditional media where con artists can make money off attention. A lot of money, it turns out. The purpose of such campaigns is less to falsely influence people’s beliefs — though this is certainly one of their consequences — than it is to grab their eyeballs with sensationalized stories: The practice is more akin to tabloid journalism than state propaganda. Feeding provocative and inaccurate political stories to people in Western democracies was so profitable in the late 2010s that entrepreneurs in Kosovo and Macedonia turned their small towns into mills for putting stories on Facebook and other platforms.
These three processes were conflated under the term “misinformation” during the COVID-19 pandemic. Concerns in the public-policy community coalesced around the word — previously used mostly by scholars thinking about election interference — as the crisis unfolded. All of a sudden, “misinformation,” however ill-defined, was understood as standing in the way of the global public-health response. The United Nations and the World Health Organization declared an “infodemic,” and researchers scrambled to diagnose and treat it.
Unfortunately, the medicalization of misinformation research led to scholarly agendas so broad and ambitious as to be impossible. The American Psychological Association’s 2023 Consensus Statement reads: “To fully grasp the impact of health misinformation in our time, it is necessary to understand the psychological factors that drive it in general: the qualities that make us likely to believe and share it, the levers of manipulation used by its creators, and the network effects induced by today’s media and political landscape.” This language recalls a faith in scientific determinism that is at odds with the fact that even physical phenomena are sometimes impossible to predict, let alone the complex interactions between human minds and networked technological systems.
Beyond these thorny difficulties, scholarly examination of “misinformation” per se creates two further challenges. First, most of the actors causing harms in the information environment produce a mix of true and false information, as when conspiracy theorists use real technical glitches with voting machines to undermine confidence in elections, or when anti-vaccine influencers cite true, but rare, vaccine-induced heart injuries. Thus defining the field of study based on the factuality of content makes holistic assessments of actors and their incentives challenging.
Second, it puts the researcher in the position of adjudicating the truth value of content. This is not the role researchers should play, for a few reasons. Once researchers try to determine what counts as “false and inflammatory content” and what doesn’t, it becomes hard to maintain an appropriate level of scholarly objectivity. Researchers, like all human beings, are susceptible to ideological bias; and if every potential untruth has to be subjected to its own analytical process, there is simply too much work to be done. Even on a purely factual level, we are too often incorrect. From bloodletting to ease fevers, to measuring the skull to understand a person’s mental faculties, to washing hands and disinfecting surfaces to prevent COVID, the history of scientific knowledge is replete with errors made by those attempting to promote the truth and contain falsehoods.
Instead of wading into muddy descriptions of misinformation, the global research-policy community should invest in building the detection and measurement systems we need to observe how the information environment is changing and how that impacts real-world outcomes.
Such an investment is long overdue. Consider a parallel in the natural sciences: In advanced societies, because economic activities often create pollution that causes harm to people, we weigh economic growth against environmental damage and use policy to balance costs and benefits. We can do so because we have invested hundreds of millions of dollars every year in measuring economic activity and environmental conditions, including pollution.
Clearly, there are vast, population-level harms originating in the information environment today — many of which, such as mental-health issues, do not necessarily involve false information. But we don’t have the necessary measurements in place to even understand the costs and benefits, much less formulate a good policy response. We suspect, for example, that the sharing of TikTok videos that reference Osama bin Laden’s “Letter to America” might shift the political opinions of the platform’s users, but we don’t know. We suspect that the profusion of images of “ideal” bodies on Instagram might lead to depression and anorexia among teenagers, but we don’t know. And the data we need to rigorously study such questions is increasingly costly for researchers to obtain.
At the other end of the causal chain, we have very little information on how platform policies or regulatory changes impact the information environment. We have reliable evidence that Germany’s Network Enforcement Act has reduced hate speech online, but that is a rare exception. As a result, our policy conversation is dominated by dueling anecdotes. And big changes, such as YouTube’s decision last week to change how it recommends fitness videos to teens, are made without the kinds of evidence we look for in other policy areas.
To do better, we need to make researching the information environment radically easier. We should have standard collections that track activity on multiple platforms over time, just as we do with economic activity. This is not something commercial tools or even platforms’ internal data can provide, because as content-moderation standards evolve — e.g., a company decides to remove a given actor or modifies its algorithm for identifying dehumanizing images — platforms apply the new standards retrospectively and require their commercial partners to do the same. That means the past as we see it today in these tools is not the past as it was; it is the past subject to today’s standards, which is crippling for research on sensitive topics. For example, the mass removal of anti-vaccine and election-denial content from March 2020 to mid-2021 makes it very difficult to study how such content contributed to COVID mortality or the January 6 insurrection, because it has been effectively erased from the historical record.
Beyond standardized data, we need shared investments to overcome technical roadblocks that make it so hard for scientists from diverse fields to enter this space. Environmental science is so powerful, in part, because deep investments in shared measurement infrastructure and data processing enable experts in disciplines ranging from geology, chemistry, and oceanography to public policy and law to flexibly study and address a wide range of environmental challenges. NASA’s Earth Science Data Systems Program, for example, provides ready access to data at multiple levels of detail from dozens of sources, both historical and current, enabling studies on everything from fisheries management to deforestation to how agricultural practices shape fertilizer runoff.
We can create systems that would allow researchers to study the full breadth of the information environment and build shared understanding across disciplines. Commercial social-listening tools, which resell information from social-media platforms to businesses, have proven the core functionality is possible. What is lacking is serious government investment to support tools with the features we need to observe the patterns and pollutants, if you will, in the information environment. Such tools would allow us to push beyond the phantom subject of “misinformation” toward a more reliable scientific understanding of how the information environment is shaping society today and how we can build a healthier one in the future.