Most of us have been taught to think of scientific bias as a distortion of scientific results. As long as we avoid misinformation, fake news, and false conclusions, the thinking goes, the science is unbiased. But the deeper problem of bias involves the questions science pursues in the first place. Scientific questions are infinite, but the resources required to test them — time, effort, money, talent — are decidedly finite. This selection stage is where the battle over whether science will serve the public good or private profit is won and lost. So long as researchers fail to set their agendas based on evidence about how their research fits into larger social and political dynamics, corporations will continue to do it for them.
Corporations have a long history of influencing the production and dissemination of scientific knowledge — often without clearly biasing the results of the studies. In the 1970s and 1980s, R.J. Reynolds, a major tobacco company, funded research at elite institutions like Harvard and Yale Universities to determine the causes of cancer and other degenerative diseases. The studies examined the impact of a wide array of factors, including environmental toxins, genetics, stress, and personality type. R.J. Reynolds invested more than $45 million in this research.
If you were a scientist on the receiving end of this lavish funding, you might be forgiven for seeing it as an unmitigated good. After all, it would accelerate scientific discovery, and could be used to save lives. But by steering research toward other possible causes of cancer, the tobacco industry was slyly reorienting the scientific debate away from the dangers of smoking. This act of corporate prestidigitation distracted scientists from the most effective health interventions and allowed a company peddling a deadly product to skirt public accountability.
Once you know to look for it, this kind of bias is hiding everywhere in plain sight. Take, for instance, the recent infatuation in the social sciences with “nudges.” These are interventions that aim to change behavior with a light touch (e.g., a text message), often deployed with weighty public-policy ends in mind, such as reducing waste or debt. Nudges became popular because they are simple and cheap. They have also received tremendous support from corporations and other institutions, in the form of funding, strategic programs, and public-relations campaigns, which has further expanded the research devoted to these interventions.
Unfortunately, nudges are not terribly effective. More social scientists are beginning to wonder exactly how productive this line of work has been, and whose interests it has really served. A new paper by the behavioral scientists Nick Chater and George Loewenstein argues that this focus on individual-level interventions has distracted from more-serious study of systemic change. A framing focused on small, individual-level decisions, they suggest, has been encouraged and developed by corporations looking to shut down discussions of deeper structural problems.
One of Chater and Loewenstein’s core examples is the so-called carbon footprint calculator, introduced by BP in 2002 as part of the company’s rebranding as “Beyond Petroleum.” This calculator allowed consumers to measure their personal carbon emissions. It also, conveniently, reframed the climate problem as one determined by individual responsibility (e.g., biking to work instead of driving), rather than one that can be resolved only through large-scale policy change (e.g., regulating industrial carbon emissions). Chater and Loewenstein themselves acted as consultants and advisers on work dedicated to “nudging” consumers into smaller carbon footprints. Their change of heart reflects their honest recognition that the framing supplied by the petroleum industry misdirected scientists’ focus to research questions targeting consumers, rather than the real source of climate change — industrial carbon output.
As with carbon pollution, so too with many other classic targets of nudge interventions. Nutrition, retirement savings, smoking cessation, prejudice, and so on would be more effectively tackled through structural change. Ambitious large-scale testing finds that nudges reduce implicit prejudice for only a few hours — and they don’t reduce explicit prejudice at all. A recent meta-analysis concludes that, while nudges have been popular (representing 76 percent of prejudice-reduction experiments over the past decade), these light-touch interventions are ill-suited to tackle the problem. The authors liken these interventions to cold remedies: The systemic roots of prejudice require stronger medicine.
The research on nudges could be completely unbiased in the sense that it provides true answers. But it is unquestionably biased in the sense that it causes scientists to effectively ignore the most powerful solutions to the problems they focus on. As with the biomedical researchers before them, today’s social scientists have become the unwitting victims of corporate capture.
Taking responsibility for the research agenda doesn’t mean that scientists are, or must become, activists. People rightly worry that science is being hijacked by political agendas, and the widespread perception of science as untrustworthy would seriously undermine its potential to play a positive role in society. Scientists should resist both corporate capture and excessive deference to social movements for the same reason: Science requires a certain measure of independence from broader political struggles. This independence is what ensures that science responds to evidence, and not just to political machinations.
But good science asks for more than simply gathering answers. This is doubly true when research must overcome elite indifference to pressing social problems. As Rose Abramoff, a climate scientist who was fired after protesting at a meeting of the American Geophysical Union, put it: “The scientific community has tried writing dutiful reports for decades, with no reduction in greenhouse gas emissions from fossil fuels to show for it.”
If the agenda of scientific research is to be set by scientists and to serve the public good, that will require redistributing the resources and decision-making power that now reside with large granting institutions and private funders. Researchers currently work on problems that allow them to win grants reflecting the funders’ priorities — but not necessarily those of the scientific or broader community. More public funding for research would in itself make a sizeable dent in the problem. Changes to the structure of funding — for instance, replacing peer-reviewed grant processes with lotteries — might further encourage scientists to set their own agendas. One proposal involves a two-stage system: an initial round of peer review eliminates proposals that fail to meet minimal scientific standards, after which a computer-assisted lottery selects at random which of the remaining projects to fund.
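To make the mechanics of that proposal concrete, here is a minimal sketch of a two-stage selection in Python. The scoring threshold, budget, and proposal format are illustrative assumptions, not features of any actual funding agency’s process.

```python
import random

def two_stage_lottery(proposals, min_score=3.0, budget=10, seed=None):
    """proposals: list of (title, peer_review_score) pairs.

    Hypothetical parameters: min_score is the minimal scientific
    standard set by reviewers; budget is how many projects can be funded.
    """
    # Stage 1: peer review screens out proposals below the minimal standard.
    eligible = [title for title, score in proposals if score >= min_score]

    # Stage 2: a computer-assisted lottery funds projects at random
    # from everything that cleared the bar.
    rng = random.Random(seed)
    return rng.sample(eligible, k=min(budget, len(eligible)))

if __name__ == "__main__":
    demo = [("Proposal A", 4.2), ("Proposal B", 2.1), ("Proposal C", 3.7)]
    print(two_stage_lottery(demo, budget=2, seed=42))
```

The point of the design is that reviewers’ judgment determines only whether a proposal is fundable, while chance, rather than funders’ priorities, determines which fundable proposals win.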
We could also modify the process by which research questions are chosen. Canada’s Nunavik Research Centre provides an instructive example of “community-based science,” in which scientists and locals work together to decide which questions are worth investigating. Inuit communities contribute financial resources to hire a health board for the research center and communicate community problems in need of scientific study and monitoring to the center’s research scientists. Proposed studies are evaluated not only by scientists’ professional standards, but also by local knowledge norms and ethical requirements — a consultation that continues through the design, execution, and reporting of each study.
Science has only just begun to fully reckon with the risks of failing to set its own agenda. Moving forward, we can use the scientific process itself to help identify which questions are worth asking — and which financial, social, and political arrangements will allow us to ask them.