Back in 2013, another in a long line of tussles over scientism broke out. Leon Wieseltier, literary editor of The New Republic, told humanities majors at a Brandeis University graduation ceremony that they represented “the resistance” in a society dominated by “the twin imperialisms of science and technology.” Wieseltier sounded all the familiar themes — the enslavement of human beings to machines, the tyranny of numbers, the depredations of “technologism,” the unchallenged dominance of “utility, speed, efficiency, and convenience” in modern culture. The antidote, he claimed, was the humanities.
The cognitive psychologist Steven Pinker fired back. Petulant humanists, he charged, welcomed science when it cured disease but not when it impinged on their professional fiefdom. The march of science and the Enlightenment had vastly improved the human condition. Only science, Pinker insisted, could address “the deepest questions about who we are, where we came from, and how we define the meaning and purpose of our lives.” Humanities scholars would remain irrelevant until they embraced the scientifically informed humanitarianism that constituted the “de facto morality” of the modern world. The ensuing controversy stretched through that summer and fall.
Today a global pandemic grips the world. Societies face immediate, practical, life-or-death questions about how to incorporate science and expertise into their collective decisions. And yet the old refrains can still be heard. In Commentary, the conservative commentator Sohrab Ahmari argued that “the ideology of scientism” has plunged the world into “a half-millennial funk.” In the face of a deadly virus, Ahmari wrote, moderns lack any sense of why “life is worth living and passing on”; they cannot even assert that “being is preferable to nonbeing.” Pinker chimed in as well, arguing that political decisions favoring economic well-being over bodily health reflected the “malignant delusion” of evangelicals’ “belief in an afterlife,” which “devalues actual lives.”
And so the tired, decades-old pattern continues. Bitter public controversies swirl around climate change, intelligent design, genetically modified foods, vaccines, data mining, and dozens of other issues. In response, cultural critics reiterate their familiar positions — usually the lament that a soulless science dominates modern life or the fear that a rising tide of unreason will return humanity to the dark ages. Abstractions abound, as commentators invoke science, scientism, rationalism, the Enlightenment, the humanities, humanism, religion, faith, irrationality, the West, and modernity.
Such large-scale abstractions have proven remarkably unhelpful. Each of the issues we face has its own distinctive contours, its own complex interrelations among science and social norms, practices, and institutions. Despite the fiery statements of combatants and the worries of onlookers, the scientific enterprise as a whole is not at stake in debates over vaccination, genetic engineering, or climate change. Rather, these controversies involve particular scientific findings, theories, techniques, devices, and practices, as they relate to the deeply held (and often directly conflicting) values of many different groups.
We will struggle to address the vexing questions of the 21st century if we continue to use the blunt interpretive tools of the 19th and 20th centuries. Those tools were forged in polemics between clashing cultural elites over science’s extension into new domains — first into the history of life on earth and then into human relations — and its place in high schools and colleges. Blanket injunctions to place our trust in science, or religion, or the humanities, or any other broad framework, offer remarkably little guidance on how to respond to the social possibilities raised by particular scientific innovations.
By the mid-20th century, a remarkably broad array of religious leaders, along with humanities scholars, political conservatives, many natural scientists, and groups of dissident social scientists and secular progressives, blamed the problems of the modern world on a pervasive moral blight. They traced that problem, in turn, to misguided attempts to apply scientific methods to the morally charged domain of human action. The resulting images of science as an ethically impotent, culturally threatening force inflected the revolts of the 1960s, which powerfully reinforced the association of science with a technocratic form of liberalism. Although the contours of such images have shifted since then, they help to explain why many Americans see modern science as an alien cultural presence, despite its equally strong associations with technological progress and economic growth.
The challenges to scientific authority that have circulated in the United States since the 1920s are not wrong in every detail. Science is a messy, thoroughly human enterprise that does not, and cannot, address many of the social and moral issues we face. But many critics have tied this appropriate skepticism to extravagant claims about scientists’ ambitions for the future and influence in the present.
For all of its insights, today’s academic left also inherits many of these bad interpretive habits. By the 1980s, poststructuralists argued that social change would require dismantling not only the prevailing worldview but also the underlying sense that one worldview would work for everyone. There were no final answers, only people in conflict, waging their struggles on domains that ranged from the highest reaches of philosophical abstraction to the lowest, most mundane forms of daily activity.
Poststructural attacks on the universal values proposed by the postwar generation broadened into a critique of the whole idea of universals. Claims to universality were held merely to represent exertions of power. The campaign against universalism identified science as a kind of metapower, a uniquely potent weapon for disarming those who would resist other operations of power. This greatly influenced how scholars thought about science and its social meanings in the late 20th century. Foucauldian conceptions of “power/knowledge,” the rejection of essentialism and universalism, and poststructuralism’s assertions about the centrality of conflict all converged in a full-throated challenge to the conventional understanding of science.
These commitments shaped the academic left as its influence grew in the 1980s and 1990s. New styles of criticism joined with old, and poststructuralists denied that anyone could attain what the philosopher Thomas Nagel famously termed the “view from nowhere” and the feminist scholar Donna Haraway called the “god trick.” Even the forms of bottom-up objectivity espoused by many Marxists and by feminists in Sandra Harding’s vein ran afoul of this critique. Haraway sought to ground the capacity for genuine insight in self-consciously partial viewpoints. With no single, liberatory framework available, she argued, an array of “situated knowledges” offered the only alternative to the false objectivity of mainstream science.
As the presumption of universalism — and thus of a common moral framework — faded, a new emphasis on difference prevailed on the academic left. “The postmodern worldview entails the dissipation of objectivity,” wrote Zygmunt Bauman, with “the slow erosion of the dominance once enjoyed by science over the whole field of (legitimate) knowledge” leading to the emergence of multiple, competing systems of truth.
There is much to be said for such accounts. Like so many critics before them, however, poststructural theorists have often tied common-sense arguments about the character and limits of scientific knowledge to sweeping, reductive portraits of its hegemonic influence in the modern world. They have asserted that science as a whole claims the ability to answer all questions and solve all problems. They have also contended that science reigns supreme in modern societies, determining the basic contours of our thought. Finally, they have traced a host of specific social problems to science’s cultural influence. In doing so, they have echoed generations of religious, humanistic, and conservative critics with very different social visions than their own.
The second of these assumptions — that science sets the tone of modern culture — anchors the rest of the argument and deserves special scrutiny. If science is not culturally dominant, then it cannot have caused the litany of ills often laid at its door. Is science truly so influential? Or do we blame it instinctively? What, exactly, is scientific about our world?
In the United States, science does serve important public functions. Biologists and physicists wield forms of authority in courtrooms that religious leaders and literary critics do not possess. The Federal Reserve draws on economic experts, not the Bible or Melville. The Environmental Protection Agency takes cues from the natural sciences, the Department of Education from the social sciences. Public high schools can teach Darwinism but not creationism or intelligent design. Looking at these instances, we might conclude that science enjoys a unique position of privilege in American public culture.
Yet other experts also share in such privileges. We constantly rely on the knowledge of historians, journalists, jurists, and eyewitnesses, among others, even though we do not consider their work scientific. Science’s elevated position turns out to be partly a matter of selective exclusion: In keeping with the First Amendment, public institutions in the United States refrain — rightly or wrongly, consistently or inconsistently — from treating theological tenets as established truths. Meanwhile, even the most fervent champions of literature and the arts have rarely claimed that they offer forms of knowledge that should be used in courtrooms or policy decisions.
Ironically, the detachment of public institutions from religious agencies has made it far easier for religious and humanistic critics to press their cases against science, although it has sometimes hindered their ability to find a hearing as well. As religious communities became more tolerant of one another, the shared enemy — formerly heathenism, now materialism, naturalism, or secularism — presented an obvious target. Surely, the thinking goes, secular institutions result from secular philosophies, and surely science produces those philosophies. As scientists ratcheted up their claims of value-neutrality, more and more critics drew a causal connection. Science, they argued, had brought forth a world dominated by shallow, materialistic values, by a purely instrumental mind-set that obscures the very existence of values, or perhaps by the values of a hegemonic social group, painted with the brush of neutrality.
Is it really the case, however, that secular institutions and practices reflect the cultural dominance of science? Some countries have witnessed the active imposition of “scientific” worldviews by militantly secular regimes. Yet even in such cases the institutionalized set of knowledge practices that constitute science did not necessarily align with the philosophies marching under its banner. In the United States, the relations between science, philosophy, and secularization have been especially complex and indirect. It is a gross simplification to claim, as the religious-studies scholar Huston Smith has, that science “has authored our world,” or, as Alasdair MacIntyre has, that contemporary social life is largely “the concrete and dramatic re-enactment of 18th-century philosophy.”
Many features of the modern world are secular but not scientific. Law, bureaucracy, capitalism, consumerism, journalism, education, sports: These spheres, like many others, reflect in part the waning control of religious institutions. But they do not share a single, common philosophical foundation with science. Like all social formations, each takes much of its shape from age-old human traits and from conflicts between particular groups. And each, in turn, generates a distinctive array of cultural assumptions, values, and behaviors.
This is not to say that foundational presuppositions are irrelevant. Most practices and institutions in the modern West seem pointless or even harmful to those who assume that a deity determines our worldly fortunes, that our earthly actions matter primarily in relation to our otherworldly fate, or both. The typical patterns of exertion in modern societies fit much better with the view that one’s earthly well-being largely reflects one’s earthly actions, and that these actions matter primarily for that reason.
Although the spread of that emphasis on the here and now has been highly consequential, it is neither secular nor scientific in itself. Of course, it is typical among nontheists, although some have deemed human action essentially meaningless. But it also comports with a broad range of religious understandings, even as it clashes with others. Indeed, one of the points of contention in many debates around science and modernity is the legitimacy of these comparatively worldly forms of religion.
There are important questions at stake here, with real consequences. Is there a God that intervenes in our affairs? Should our educational system emphasize biology, literature, or the Bible? The answers matter a great deal. But how we frame our arguments also matters. It is unjust and socially harmful to push all the results of human frailty onto our opponents’ ledgers. That approach breeds resentment and distrust, including skepticism toward our own cultural programs when it becomes clear that we have vastly overstated the real-world effects of the competing perspectives — and that ours is no cure for human foibles either.
Meanwhile, tracing social problems back to philosophical disagreements leaves us unable to address those problems themselves, both by misrepresenting their main causes and by convincing us that we must litigate our intellectual conflicts before we can take meaningful action. Without overstating the degree of shared ground, we should work to build coalitions where possible, even if we believe our own views will shine through in the end. Many of the ideas that profoundly shape social behavior — ideas about racial equality, for example, or the need for economic regulation — cut across religious and secular perspectives. Our competing patterns of apologetics should not forestall collective action in these areas.
Science’s champions, like its critics, have often gone to absurd lengths to discredit worldviews they considered harmful. But here it is important to distinguish between science as a set of practices and institutions and the philosophies that have gone under its name. Science’s critics often take the salutary step of differentiating science from the philosophies of materialism, naturalism, positivism, and so on. But they do so in a manner that places highly valued scientific practices on their own side of the philosophical line and blames the ills of the world on science-oriented outlooks. We would be wise to take a fairer approach.
Such an approach would insist on differentiating levels of analysis. There are three separate stories to be told: one about the social roles of scientific practices and findings, a second about the trajectories and entanglements of science-inspired philosophies, and a third about the development of secular patterns and institutions. To insist on such differentiation is not to claim that science is intrinsically pure, operating in glorious isolation from the human world. Yet distinguishing between science, philosophy, and the secular, rather than conflating them, would allow us to understand their historical entanglements more clearly — and to grapple more effectively with the implications of empirical research in our own day.
Over time, in fact, a more charitable and nuanced assessment of science might help us liberate researchers from the extravagant assertions of disinterestedness that envelop their work. It is not their claims alone, but also the arguments and actions of many other groups, that have trapped scientists in the cage of absolute value-neutrality. Critics often declare that science eschews considerations of value, in order to blame it for doing so. Some even contend that genuine science provides absolutely certain knowledge — not models, not probabilities, not calculations of risk — and must do so before we can act on it. Most of us are complicit in a version of this: When confronted with research we dislike, we demand that the researchers in question demonstrate their complete disinterestedness before we will take them seriously.
This cycle must be broken if we are to recognize science for what it truly is: a thoroughly human practice like any other, yet one that produces remarkable outcomes. Rather than arguing that science’s validity depends on the personal neutrality of individual researchers, we could instead learn to value scientific findings for their reliability. Indeed, we could improve scientific procedures by adding new features, such as citizen participation, to help ensure the reliability of the results. This may prove impossible if critics continue to view science as a monstrous cultural presence and blame it for humanity’s faults, rather than simply assessing its strengths and weaknesses. It is time to get past the polemics, acknowledge that science is a central feature of our world, and decide what we will make of it.
This essay is adapted from the author’s new book, Science under Fire: Challenges to Scientific Authority in Modern America (Harvard University Press).