In July 1969—less than two weeks after Neil Armstrong and Buzz Aldrin cavorted on the moon—The New York Review of Books published a controversial essay by John McDermott, “Technology: The Opiate of the Intellectuals.” In it, McDermott, a social scientist at the State University of New York, offered a sharp rejoinder to those who viewed the Apollo 11 mission as a harbinger of ever-more-ambitious technological triumphs. The essay was in response to a report put out by Harvard University’s Program on Technology and Society, a grand interdisciplinary effort bankrolled to the tune of several million dollars by IBM. The Harvard report was sanguine, arguing, in McDermott’s words, that “technological innovation exhibits a distinct tendency to work for the general welfare in the long run.” McDermott was having none of this “extravagant optimism.”
The prevailing belief of technologists—in McDermott’s time and ours—is that technology is the solution to all problems. It is a view especially attractive to those best positioned to reap the benefits of innovation and avoid its unattractive consequences.
Technology comforts, surrounds, and confounds us. When we argue about MOOCs, hydraulic fracturing, NSA surveillance, or drone warfare, we’re arguing about technology. Unfortunately, the conversation is impoverished by the absence of a robust cadre of scholars who can engage with and critique the role of technology in society. Instead, we have the glib boosterism of tech intellectuals like the former Wired editor Chris Anderson, the media gadfly (and CUNY journalism professor) Jeff Jarvis, the British writer Andrew Keen, and the Google executive Eric Schmidt. A fairly homogeneous group of white men with elite degrees inclined to champion innovation, disruption, and the free market, these tech intellectuals have usurped the role of explaining technology to policy makers, investors, and the public. Their arguments and advocacy are too often a tepid substitute for robust analysis and honest critique.
Today’s tech intellectuals are expert at picking buzzword-laden arguments, sometimes with one another. These narrow exchanges polarize debate and give off a whiff of self-interest: more attention generates more Internet traffic, higher speaker fees, and lucrative book contracts.
There are three main problems with how many tech intellectuals frame public debate about technology. First, they both foster and reflect an intellectual monoculture that favors Silicon Valley’s corporate point of view. In other words, innovation is almost always championed. Consider Jarvis’s 2011 book, Public Parts (Simon & Schuster), which is representative of much writing in the tech-intellectual genre. One doesn’t have to venture much beyond his subtitle—How Sharing in the Digital Age Improves the Way We Work and Live—to see that corporate innovation is something to be praised, not probed.
A similar contribution is The New Digital Age: Reshaping the Future of People, Nations, and Business (Knopf, 2013), by Schmidt and Jared Cohen, a former State Department official turned Google executive. The two men wield enormous influence not just in Silicon Valley but also in Washington. Schmidt made headlines when he was asked whether Google’s users should be concerned about sharing information with the company. “If you have something that you don’t want anyone to know,” Schmidt said, “maybe you shouldn’t be doing it in the first place.” Although Schmidt and Cohen acknowledge that digital technologies can lead to “dreadful evil,” they favor more and more “connectivity.” What they never seriously discuss is how the corporations that provide that connectivity stand to benefit.
Rarely do today’s most prominent tech intellectuals question the overall value of innovation. Why would they? They are in the innovation business, part of a corporate culture that boasts of its ability to thrive, not just survive, in a climate of constant disruption. But is innovation always beneficial? Plainly not. For starters, it can destroy jobs. From the 1920s through the 1950s, automation displaced tens of thousands of workers. Recall the conflict between Spencer Tracy (a proponent of automation) and Katharine Hepburn (an anxious reference librarian) in the 1957 film Desk Set.
What about broader societal benefits—in health care, for instance? Stanley Joel Reiser, a clinical professor of health policy at George Washington University, undertook a study of recent medical innovations. He concluded that the winners are not always patients but often hospital administrators, physicians, or Big Pharma. An especially compelling example he gave is the artificial respirator: While it has saved countless lives, it has also spawned ethical, legal, and policy debates over, literally, questions of life and death. There is also the broader question of whether it is better to spend large sums on research and development that will benefit future generations if doing so means less funding to meet today’s medical needs with existing methods.
A second key flaw is that tech intellectuals often ignore what is most central to understanding technology: its thinginess. Technology involves stuff—silicon, plastic, lithium mined in Bolivia, rare-earth minerals mined in China. This point would not be lost on iPhone assemblers in Shenzhen. Even if we limit ourselves to the world of digital devices—a common circumscription among tech intellectuals—we must recognize the profound environmental impact. According to the Silicon Valley Toxics Coalition, Santa Clara County—where Intel was born and where Google now makes its home—hosts the largest concentration of Superfund sites in the United States. More than 20 sites remain heavily polluted with the byproducts of semiconductors and other high-tech commodities. As the SVTC notes, tens of thousands of residents—primarily lower-income and minority—“live, work, and go to school near or right on top of these polluted areas.”
And the environmental impact of the digital world extends far beyond Silicon Valley. Those server farms that make cloud computing possible aren’t powered by good intentions. According to Nathan Ensmenger, a historian of computing at Indiana University at Bloomington, the equivalent of some 30 nuclear power plants is needed to keep up with the world’s uploading and tweeting. Furthermore, server farms require water, just like real farms: a single data center might pump hundreds of thousands of gallons of chilled water through its facilities every day as coolant. Meanwhile, the detritus of our disembodied digital lives presents very real problems in developing countries, where so much of it ends up. The “disposable” smartphone that we discard at the end of our telecom contract will very likely end up in a landfill outside a slum in Ghana.
A third and final problem is rooted in the previous two: Many of today’s prominent tech intellectuals are more inclined to celebrate technology than to question it deeply. (There are exceptions. Evgeny Morozov is an especially ferocious critic of Silicon Valley’s fixation on innovation. Unfortunately, his preferred mode of attack—what an essay in Democracy by the political scientist Henry Farrell called “troll and response”—generates more heat than light.) The writings and TED talks of today’s tech intellectuals have far too little in common with the critiques of scholar-intellectuals like McDermott or David Noble, the late historian of labor and technology.
In his now-classic book, Forces of Production (1984), Noble argued that Cold War military priorities and corporate concerns shaped industrial automation. As a result, sophisticated machine tools took precedence over more labor-friendly but lower-tech options despite the latter’s economic benefits. Unlike most tech intellectuals, Noble questioned the motives of corporations (and their academic allies) in promoting innovation, something he certainly did not see as an unalloyed benefit.
To be sure, Noble was a polarizing figure. His sometimes strident Marxism was never fully embraced by the academy. He once remarked that his first book, America by Design (1977), got him hired at MIT, while Forces of Production got him fired. Yet few tech intellectuals active today have much to say about how hyped innovations like 3-D printing, nanotechnology, or driverless cars will affect the work force of the future.
Back in the age of Apollo, scholar-critics like McDermott branded technology an opiate, one that seduced intellectuals with the idea that social and political problems could be solved with a technological fix. A similar observation applies today. We are dulled, lulled, and anesthetized by arguments from tech intellectuals that too often are glib, Panglossian, or in service of a corporate agenda. What this discourse needs is more-rigorous academic expertise on the history and social implications of technological change. But first we have to stop nodding off in the tech intellectuals’ opium dens.
W. Patrick McCray is a professor of history at the University of California at Santa Barbara. He is the author, most recently, of The Visioneers: How a Group of Elite Scientists Pursued Space Colonies, Nanotechnologies, and a Limitless Future (Princeton University Press, 2013).