The social-media hive mind recently took a break from circulating humorous cat videos and set out on a new path: circulating humorous robot videos. In many ways, the robots were simply picking up where the cats had left off. Like cuddly animals, the robots in question shared certain features with humans, such as arms, legs, even faces. This almost-human quality seems to be a condition of many successful Internet memes, achieving a type of humor that transcends place or language.
But as you can see if you enter the words “robot” and “fail” into YouTube, the robot meme differs from the cat meme in one important way. While cats achieve fame through displaying an almost-human level of intelligence and cultural nous, robots provoke laughter through precisely the opposite.
Given various simple tasks, such as walking up a flight of stairs, squeezing ketchup onto a hot dog, or applying lipstick to the face of a mannequin, the robots perform abysmally. They keel over sideways; they bury the hot dog under a pile of ketchup; they violently attack the mannequin with lipstick. Stupid robots.
But “fails” aren’t always funny. In June, an auto worker at the Volkswagen plant near Kassel, Germany, died when a robot reportedly grabbed him and pressed him against a metal plate. “Human error” was blamed on that occasion — where else could moral condemnation have been directed?
The macabre tinge around robot fails is accompanied by anxiety, given widespread predictions of impending automation across the labor market. What separates these predictions from the often upbeat prophecies of the 1990s is that machines are now said to be coming for professional and creative jobs, and not merely manual labor or simple clerical tasks. This vision of imminent job eradication animates two new books — Jerry Kaplan’s Humans Need Not Apply (Yale University Press) and Martin Ford’s The Rise of the Robots (Basic Books).
There is a deep philosophical and moral unease lurking within our burgeoning obsession with the robots. Do we fear that robots are eventually going to be indistinguishable from us, or that they’re going to be radically different from us? The ambiguity of robots — which provokes mirth in the safe confines of YouTube videos — lies in the fact that they do exactly what we do, just not in any recognizable way.
While it is fun to laugh at a humanoid failing to climb a flight of stairs, robots are often most effective at human tasks when they least resemble us physically. Kaplan, an entrepreneur and fellow in legal informatics at Stanford, imagines a future in which house-painting is performed by a squadron of flying drones with sensors, doing a week’s work in a couple of hours.
In the realm of artificial intelligence (AI) things are more complicated. As recently as the dot-com boom, data-processing capacity was understood to only increase the value of human judgment and creativity. But these small islands of human distinction may soon become submerged within automation as well. “Machines are starting to demonstrate curiosity and creativity,” writes Ford, a software developer and Silicon Valley entrepreneur.
AI is shifting from narrow applications toward Artificial General Intelligence (AGI), in which computers will match all of the capabilities that we associate with our own minds. AGI represents the perfect mimicry of human thought, which some computer scientists believe will be achieved within 20 years. Ford is suspicious of apocalyptic talk of the singularity — the moment predicted by the computer scientist Ray Kurzweil and others in which the human mind transcends biology, and thereby achieves immortality — but is nevertheless nervous about the pace at which AI is advancing.
Both authors see a hint of this nervousness around Watson, the IBM computer that won on Jeopardy! in 2011. Unlike chess (which IBM famously mastered in the 1990s with Deep Blue), Jeopardy! requires cultural understanding, as questions often involve irony, wordplay, and unspoken insinuations. Computers are now performing precisely the tasks of intuition and interpretation that were once deemed beyond automation. The field of affective computing, which processes the emotions conveyed in speech or body language, is still a little clumsy, but it becomes less so each year.
Once machines can produce beautiful art, give rousing lectures, or hold engaging conversations, what could we do?
That computational power is expanding all the time is not news. Moore’s Law, which holds that the number of transistors on a chip (and, roughly, its processing power) will double every two years, has turned out to be a decent approximation of the truth. Nevertheless, both Kaplan and Ford go to some lengths to ram home just how extraordinary and transformative any exponential progression is in practice. Moore’s Law helps us understand why the opportunities and threats posed by computing seem to change so rapidly. When machine capabilities are constantly doubling, comparisons between those capabilities and our natural, human ones can very quickly become obsolete.
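To make the arithmetic vivid, here is a back-of-the-envelope sketch of my own (not an example from either book), assuming an idealized, clean doubling every two years:

```python
# Illustrative only: how a capability that doubles every two years
# compounds over two decades (assumes an idealized two-year doubling period).
base = 1.0  # the starting capability, in arbitrary units
for years in range(0, 21, 2):
    capability = base * 2 ** (years / 2)
    print(f"year {years:2d}: {capability:6.0f}x the starting capability")
```

Run it and the final line reads roughly a thousandfold: whatever benchmark of human ability the machine was measured against at year zero has been left far behind ten doublings later.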
And yet robots unnerve us not simply because they have superhuman power, but also because they seem capable of mimicking us so effectively. Here, Kaplan and Ford stress the importance of the neural-networking approach to AI, modeled on brain functionality. The principles of neural networking have been around since the 1940s, but the approach has only recently come to usurp the more established structured-programming method.
While structured programming involves executing rules that a human has specified in advance, neural networking involves cumulative pattern recognition, with the result that a computer can learn from experience. Rather than seek to codify fixed rules for identifying the contents of a photograph, for example, a machine that uses neural networking would process millions of photographs, along with accompanying data, and learn how to identify the contents. As we continue to produce ever more data, so the power of neural networking increases. Neural networking is only modeled upon the brain, but its direction of progress is toward AGI — a perfect replica of the brain.
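A toy example makes the contrast concrete. The sketch below is mine, not the authors’; it trains a single artificial neuron, the simplest building block of a neural network, on a handful of invented labeled examples. No rule is written in advance; the neuron simply adjusts its weights whenever experience proves it wrong:

```python
# A toy illustration (my sketch, not an example from either book): a single
# artificial neuron learns to separate two classes from labeled examples,
# rather than following rules written in advance.

# Made-up training data: (feature vector, label). By construction, examples
# whose first feature exceeds the second belong to class 1.
examples = [
    ([0.9, 0.1], 1), ([0.8, 0.2], 1), ([0.7, 0.3], 1),
    ([0.2, 0.8], 0), ([0.1, 0.9], 0), ([0.3, 0.7], 0),
]

def predict(weights, bias, features):
    """Weighted sum plus bias, thresholded at zero: one artificial neuron."""
    activation = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if activation > 0 else 0

weights, bias, learning_rate = [0.0, 0.0], 0.0, 0.1

# Learning from experience: nudge the weights toward any example the
# neuron currently gets wrong, and repeat.
for _ in range(20):
    for features, label in examples:
        error = label - predict(weights, bias, features)
        weights = [w + learning_rate * error * x for w, x in zip(weights, features)]
        bias += learning_rate * error

print([predict(weights, bias, f) for f, _ in examples])  # -> [1, 1, 1, 0, 0, 0]
```

After a few passes the neuron labels every example correctly, not because anyone codified the rule but because the rule emerged from repeated correction. Scale the same principle up to millions of photographs and many millions of weights and you arrive at the systems Kaplan and Ford describe.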
These books contain a stark message: Unless you happen to be the owner or manufacturer of robots, your job will eventually be at risk. Just as agricultural employment shrank drastically as a result of industrialization, so the majority of jobs in the West today are potentially unnecessary. During the 1980s and ’90s, when panic focused on the threat of offshoring, policy makers and economists pointed to the growing sectors of employment that could not be offshored, such as retail, health care, education, hospitality, and a vaguely defined cluster of creative and “knowledge-based” industries. If Kaplan and Ford are right, a combination of automation and further offshoring is going to shrink demand for workers in these sectors, too.
Journalism, law, and accountancy can already be performed on a basic level by computers. Computerized financial trading, executed at speeds no human can match, is transforming markets. Low-paid, low-productivity work, such as fruit-picking and fast-food service, has traditionally been viewed as the sort of labor that isn’t worth much capital investment. But such work can now be performed automatically as well as cheaply — the fruit-picking robot can even test for ripeness as it goes.
What is equally startling is how few jobs are generated by the most successful corporations of the digital age. In 2012, Google generated profit of nearly $14 billion while employing fewer than 38,000 people. In 1979, General Motors had profits of $11 billion (in today’s money) while employing 840,000 people. Then there are the ironically named “sharing economy” companies, such as Uber, whose very business model depends on providing services without creating actual jobs. In the case of Uber, human drivers can be viewed as a mere stopgap on the road to full automation. Companies such as Amazon are constantly attentive to new routes to automation, like delivery via drone and “dark stores,” which employ no people at all, operating as fully automated distribution centers.
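The contrast is starker still when expressed per employee. Using only the figures cited above, a quick calculation (my arithmetic, not the authors’) runs as follows:

```python
# Back-of-the-envelope comparison using the figures cited in the paragraph above.
google_profit, google_staff = 14_000_000_000, 38_000   # Google, 2012 (approximate)
gm_profit, gm_staff = 11_000_000_000, 840_000          # GM, 1979, in today's money

print(f"Google: ~${google_profit / google_staff:,.0f} of profit per employee")
print(f"GM:     ~${gm_profit / gm_staff:,.0f} of profit per employee")
```

Roughly $370,000 per Google employee against roughly $13,000 per GM worker: the exemplary firms of the digital economy simply do not need many people to generate their returns.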
The Rise of the Robots is commendable for its careful economic analysis of these trends. Ford acknowledges that the fear of permanent job destruction is as old as industrialization itself, and that labor markets have always adapted and bounced back. He is attuned to the risks of exaggeration and extrapolation, which can lead us to blame technology for economic upheavals while ignoring other structural factors, such as the rise of finance.
Yet the conventional liberal economic argument (that productivity gains inevitably lead to a more efficient reallocation of spare resources) does not convince Ford. The economic gains that result from automation tend to have a winner-take-all character. The result is a quasi-feudal economy, in which the vast majority of people get to use digital services such as Google free of charge, while a tiny minority captures exceptional profits from them.
Kaplan adopts a somewhat different tone. Humans Need Not Apply professes to be concerned about where technology is heading, especially in terms of inequality, but it occasionally lapses into breathless Silicon Valley anecdotes when discussing those benefiting from such apparent injustice. Sentences such as “Jeff [Bezos, CEO of Amazon] makes more on a Saturday spent on the golf course than the other college grads in his foursome, taken together, will earn in their entire lifetimes” sound more fawning than appalled.
The two-page description of Kaplan’s own mansion — “Bored? You can sit in the graceful Adirondack chairs around the fire pit, play chess on a life-sized outdoor chess board, laze in the gazebo by the pool,” etc. — is apparently intended to make the point that the Kaplans are not even in America’s top 1 percent. By contrast, “one of our friends owns seven residential properties, including a ranch, houses in Big Sur. … Paul Simon performed as the entertainment for one friend’s private birthday party.” OK, enough.
For those of us less well acquainted with the bleeding edge of Silicon Valley and MIT inventions, prophecies of socioeconomic apocalypse may provoke a different kind of skepticism. I couldn’t help occasionally wondering if these were some of the same forecasters who once promised us a knowledge economy staffed by well-paid “symbolic analysts.” Wasn’t human capital going to be the most valuable asset in this new, ideas-based economy?
Against such skepticism, Kaplan and Ford point us once again to the exponential rate of change and expansion in computing today. We are talking not about incremental productivity increases but about a fundamental restructuring of capitalism. Perhaps technology circa 1995 was facilitating a shift toward a knowledge-based labor market, in which humans would be employed to do precisely those things that computers cannot. But the pace of change is such that we’re no longer talking about the same technology today. This, of course, also suggests that 2015-style pessimism regarding job eradication could itself be outmoded within a few years.
The singularity, the dark store, the robot that crushed the auto worker against a metal plate — there are gothic undertones here, of machines becoming animated. These machines aren’t just promising to liberate us from drudgery — they’re promising to liberate us from work that is fulfilling and culturally expressive.
Since the Victorian era, the labor market has been the arena in which the virtues and injuries of capitalism can be seen. Classical economic liberals look at the labor market and see a platform for social mobility, one in which individual effort is matched by monetary reward. The neoliberals of the 20th century took this optimism further still, adding the notion of human capital — that people could augment themselves through education or self-branding so as to increase their own value in the market.
The labor market is no less pivotal to Marxist analyses of capitalism. The treatment of labor as a commodity, to be bought and sold on a market, is what allows capitalists to acquire more value from workers than they actually pay for, which explains the accrual of profits. Without labor, there could be no value. Without labor markets, there could be no capitalism.
Marx liked to depict capital as a vampire that sucked the blood from living labor. But the fantasy of fully automated capitalism contains a different monstrosity altogether: the zombie that no longer needs us at all. As the economist Joan Robinson has written, if there is one thing worse than being exploited by capital, it is not being exploited by capital. The vision that Kaplan and Ford put before us is of a world in which machines don’t even bother to extract value from us any longer — they’re too busy trading with one another. What might capitalism look like if labor markets lose their political centrality? Would this even be capitalism?
This question lurks in Thomas Piketty’s Capital in the Twenty-First Century, which highlights how the inheritance of capital is a far more effective route to riches than the exertion of effort in the workplace. Piketty’s account forces us to pay attention to the family as a source of income — work is an increasingly unlikely path to acquiring wealth.
Theories of financialization, such as those of the economist Costas Lapavitsas or the sociologist Greta Krippner, point in a similar direction, showing how firms have deliberately sought to shift away from productive activities and toward balance-sheet manipulation and financial innovations as sources of profit. The vaudevillian horror show of machines broken free from human control is mirrored in the anxieties of contemporary political economy. The specter of autonomous machines is also the specter of autonomous capital, no longer anchored in society via the wage relation.
The neoliberal ploy that each individual be treated as a chunk of capital was present in the discourse of the 1990s “knowledge economy,” and the answer to virtually any economic policy question was “education.” This no longer feels adequate. As Kaplan and Ford point out, the market value of most qualifications is diminishing all the time. Given the possible scale of automation, Ford argues, the idea that education can achieve prosperity for all is like “believing that, in the wake of the mechanization of agriculture, the majority of displaced farmworkers would be able to find jobs driving tractors.”
An economy in which capital has replaced labor may witness the rise of a few thousand well-paid YouTube stars, but it would also feature a proliferation of unpaid internships, adults living off their parents, and unpaid workfare contracts. As Ford points out, even where humans are cheaper than robots to employ, there are various reasons that automation may nevertheless be preferable. Robots bring less baggage than people. The prospects for inequality under these conditions are terrifying.
Kaplan and Ford believe that technology can deliver social value, if we adapt our institutional structures and policies. To this end, their policy prescriptions are ambitious and idealistic. Kaplan outlines a tax instrument that would create incentives for corporations to share ownership as widely as possible, distributing the gains from this new economy more equally. He also suggests a form of asset-based welfare, in which the benefits of ownership, and not just cash, are better distributed through society. Following in a liberal tradition, this is effectively a demand that the government deal with the oligarchies of capitalism by producing more capitalists.
Ford advocates a basic income guarantee, an idea that is accumulating support right now. If the labor market will not provide the income that people need, some other institution will be required to take its place. He makes the case well, dismissing the simplistic policy narrative that people need to be cajoled and incentivized to work or else the economy will grind to a halt. On the contrary, neoliberal economies seem to be teeming with people wanting to do fulfilling and creative things but struggling to get paid for them.
The chance of such policy ideas being adopted is slim at best. As with Piketty’s proposal for a global wealth tax, they require not only greater political coordination than seems available right now, but also a wholesale inversion of policy orthodoxy. Neoclassical economics, which provides the basis for so much policy, starts from the assumption that resources and time are scarce. Hence the curiosity that as our national productive capacity swells from year to year, political discourse seems ever more fixated on constraints and cuts.
With policy making locked into a supply-side straitjacket, we lack the collective capacity to divert money into the pockets of those who are likely to spend it. Our nonsensical orthodoxy states that as long as consumer goods are getting cheaper, everybody benefits, a view of humanity as essentially passive. Is it conceivable that Silicon Valley insiders such as Kaplan and Ford might begin to challenge this miserable emphasis on austerity and incentives, to refocus on a sharing of surpluses? It certainly seems more likely right now than an uprising from the Left.
But what of the more diffuse fear, the fear that we are simply not necessary any longer? Once machines can produce beautiful art, give rousing lectures, or hold engaging conversations, what could we do, even in a society of shared ownership or a basic income? This is where hope depends on Kaplan and Ford being wrong.
For all the harbingers of AGI, the machines demonstrating curiosity and creativity, there is an unavoidable sense in which the robots can’t understand what they’re doing. Their inability to complain, which is precisely what makes them attractive to the likes of Uber and Amazon, is also what renders them somewhat stupid after all. They are locked into what Max Weber termed instrumental rationality. Endlessly performing, relentlessly producing, they are incapable of ever saying “enough’s enough.”
In this they hold up a daunting mirror for us to look in. They represent an impossible benchmark of success and efficiency, one that recedes so far into the distance ahead that the only sane response is to abandon the idea of humans as capital altogether. At a time when people are willingly plastering their bodies with wearable technology in self-optimization efforts, this prospect may be no more imminent than that of a guaranteed basic income. Perhaps, then, we should start with a form of human distinction that is already familiar to us via the robot-fail videos: that we are capable of laughing at them, but not vice versa.
William Davies is a senior lecturer in politics at Goldsmiths, University of London. He is the author of The Happiness Industry (Verso, 2015).