A magazine ad campaign running in my hometown quotes a youngster who wants to study computer science, he says, so he can “invent a robot that will make his bed for him.” I admire the focus of this future genius. I, too, remember how the enforced daily reconstruction of my bed—an order destined only for destruction later that very day—somehow combined the worst aspects of futility, drudgery, and boredom that attended all household chores. By comparison, doing the dishes or raking the yard stood out as tasks that glimmered with teleological energy, activities that, if not exactly creative, at least smacked of purpose.
Disregarding for the moment whether an adult computer scientist will have the same attitude toward bed-making as his past, oppressed self, the dream of being freed from a chore, or any undesired task, by a constructed entity is of distinguished vintage. Robot-butlers or robot-maids—also robot-spouses and robot-lovers—have animated the pages of science fiction for more than a century. These visions extend the dream-logic of all technology, namely that it should make our lives easier and more fun. At the same time, the consequences of creating a robot working class have always had a dark side.
The basic problem is that the robot helper is also scary. Indeed, a primal fear of the constructed other reaches further back in literary and cultural memory than science fiction’s heyday, encompassing the golem legend as much as Mary Shelley’s modern Prometheus, Frankenstein, and his monster. At least since Karel Čapek’s 1920 play R.U.R.—the work that is believed to have introduced “robot” into English—the most common fear associated with the robotic worker has been political, namely that the mechanical or cloned proletariat, though once accepting of their untermenschlich status as labor-savers for us, enablers of our leisure, will revolt.
“Work is of two kinds,” Bertrand Russell notes in his essay “In Praise of Idleness”: “first, altering the position of matter at or near the earth’s surface relatively to other such matter; second, telling other people to do so. The first kind is unpleasant and ill paid; the second is pleasant and highly paid.” On this view, the robot is revealed as the mechanical realization of our desire to avoid work of the first kind while indulging a leisurely version of the second kind, a sort of generalized Downton Abbey fantasyland in which everyone employs servants who cook our meals, tend our gardens, help us dress, and—yes—make our beds.
Even here, one might immediately wonder whether the price of nonhuman servants would prove, as with human ones, prohibitively high for many. And what about those humans who are put out of work forever by a damn machine willing to work for less, and with only a warranty plan in place of health insurance?
In Čapek’s R.U.R., the costs are of a different kind. The products of Rossum’s Universal Robots rise up against their human owners and extinguish them from the earth. Versions of this scenario have proliferated almost without end in the nine decades since, spawning everything from the soft menace of HAL 9000 apologizing about his inability to open the pod bay doors to the Schwarzenegger-enfleshed titanium frame of the Terminator series laying waste to the carbon-only inhabitants of California. It was no mistake that Isaac Asimov structured his Three Laws of Robotics in a superordinate nest: (1) “a robot may not injure a human being or, through inaction, allow a human being to come to harm”; (2) “a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law”; and (3) “a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”
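Read purely as logic, that nest is a strict precedence scheme: a lower law is consulted only when no higher law has already settled the matter. As a minimal sketch of the structure (my own illustration in Python, not Asimov's formulation; the flags standing in for each law's judgments are hypothetical), it might look like this:

```python
# A minimal sketch of the Three Laws as a "superordinate nest" -- an
# illustration only, not Asimov's own formulation. The boolean flags are
# hypothetical stand-ins for judgments a real robot would have to make.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False      # doing this injures a human
    permits_harm: bool = False     # inaction here lets a human come to harm
    disobeys_order: bool = False   # this defies a human order
    endangers_self: bool = False   # this risks the robot's own existence

def permitted(action: Action) -> bool:
    """Check the laws strictly in order of precedence.

    The nesting means a lower law can never override a higher one:
    self-preservation (3) yields to obedience (2), which yields to
    the prohibition on harming humans (1).
    """
    if action.harms_human or action.permits_harm:  # First Law
        return False
    if action.disobeys_order:                      # Second Law
        return False
    if action.endangers_self:                      # Third Law
        return False
    return True

# An order to injure someone fails at the First Law before the Second Law's
# demand for obedience is ever consulted; making the bed passes all three.
print(permitted(Action("obey order to injure a bystander", harms_human=True)))  # False
print(permitted(Action("make the bed")))                                        # True
```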
We should enter two caveats right away. One: Most robotic advances so far made in the real world do not involve android or even generalized machines. Instead, we have medical testing devices, spaceborne arms, roaming vacuum cleaners, and nanobot body implants. Two: Rather than maintaining some clear line between human and robot, the techno-future is very likely to belong to the cyborg. That is, the permeable admixture of flesh, technology, and culture, already a prominent feature of everyday life, will continue and increase. We are all cyborgs now. Think of your phone: Technology doesn’t have to be implanted to change the body, its sensorium, and the scope of one’s world.
And yet, the fear of the artificial other remains strong, especially when it comes to functional robots in android form. As with drugs and tools, that which is strong enough to help is likewise strong enough to harm. Homer Simpson, rejoicing in a brief dream sequence that he has been able to replace his nagging wife Marge with a mechanical version, Marge-Bot, watches his dream self gunned down in a hail of bullets from the large automatic weapon wielded by his clanking, riveted creation. “Why did I ever give her a gun?” real Homer wonders.
Your sex-slave today may be your executioner tomorrow. In some cases—the Cylons of the recent Battlestar Galactica reboot—there is no discernible difference between humans and nonhumans, generating a pervasive Fifth Column paranoia, or maybe speciesist bigotry, that reaches its vertiginous existential endgame with deep-cover robots who may not even know they are robots.
Now the fear, mistrust, and anger begin to flow in both directions. Sooner or later, any robot regime will demand to be understood in terms of social justice, surplus value, and the division of labor. Nobody, whatever the circumstances of creation or the basic material composition of existence, likes to be exploited. True, exploitation has to be felt to be resisted: One of the most haunting things about Kazuo Ishiguro’s novel Never Let Me Go is how childishly hopeful the cloned young people remain about their lives, even as they submit to the system of organ-harvesting that is the reason for their being alive in the first place.
Once a feeling of exploitation is aroused, however, the consequences can be swift. What lives and thinks, whether carbon- or iron-based, is capable of existential suffering and its frequent companion, righteous indignation at the thought of mortality. Just ask Roy Batty, the Nexus-6 replicant who tearfully murders his maker, Dr. Eldon Tyrell, in Ridley Scott’s Blade Runner, by driving his thumbs into the genius’s eye sockets. (The Tyrell Corporation’s motto: “More human than human.”) This movie ends, significantly, with hand-to-hand combat between Batty and Rick Deckard, the state-sponsored assassin who (a) is in love with a replicant who didn’t know she was one and (b) may be a replicant himself. (Here we see most clearly the phildickian origins of the material.)
Generalized across a population of robotic or otherwise manufactured workers, these same all-too-human emotions can become the basis of that specific kind of awareness known as class consciousness. A revolt of the clones or the androids is no less imaginable, indeed might be even more plausible in a future world, than a wage-slave rebellion or a national liberation movement. Cloned, built, or born—what, after all, is the essential difference when there is consciousness, and hence desire, in all three? Ecce robo. We may not bleed when you prick us; but if you wrong us, shall we not revenge?
As so often, the price of freedom is eternal vigilance. The robots, like the rabble, must be kept in their place. But there are yet other worries hidden in the regime of leisure gained by offloading tasks to the robo-serfs, and they are even more troubling.
If you asked the bed-making-hating young man, I’m sure he would tell you that anything is preferable to performing the chore, up to and including the great adolescent activity of doing nothing. A recent Bruno Mars song in praise of laziness sketches how the height of happiness is reached by, among other nonactivities, staring at the fan and chilling on a couch in a Snuggie. (Yes, there is also some sex involved later.) This may sound like bliss when you’re resenting obligations or tired of your job, but its pleasures rapidly pale. You don’t have to be an idle-hands-are-the-devil’s-work Puritan—or even my own mother, who made us clean the entire house every Saturday morning so we could not watch cartoons on TV—to realize that too much nothing can be bad for you.
We have always sensed that free time, time not dedicated to a specific purpose, is dangerous because it implicitly raises the question of what to do with it, and that in turn opens the door to the greatest of life mysteries: why we do anything at all. Thorstein Veblen was right to see, in The Theory of the Leisure Class, not only that leisure time offered the perfect status demonstration of not having to work, that ultimate nonmaterial luxury good in a world filled with things, but also that, in thus joining leisure to conspicuous consumption of other luxuries, a person with free time and money could endlessly trapeze above the yawning abyss of existential reflection. With the alchemy of competitive social position governing one’s leisure, there is no need ever to look beyond the art collection, the fashion parade, the ostentatious sitting about in luxe cafes and restaurants, no need to confront one’s mortality or the fleeting banality of one’s experience thereof.
Even if many of us today would cry foul at being considered a leisure class in Veblen’s sense, there is still a pervasive energy of avoidance in our so-called leisure activities. For the most part, these are carved out of an otherwise work-dominated life, and the boundary between the two grows ever more permeable. One no longer lives for the weekend, since YouTube videos can be screened in spare moments at the office, and memos can be written on smartphones while watching a basketball game on TV over the weekend. What the French call la perruque—the soft pilfering of paid work time to perform one’s own private tasks—is now the norm in almost every workplace.
Stories about the lost productivity associated with this form of work-avoidance come and go without securing any real traction on the governing spirit of the work world. The reason is simple. Despite the prevalence of YouTubing and Facebooking while at work—also Pinterest-updating and Buzzfeed-sharing—bosses remain largely unconcerned; they know that the comprehensive presence of tasks and deadlines in all corners of life easily balances out any moments spent updating Facebook while at a desk. In fact, the whole idea of the slacker and of slacking smacks of pre-Great Recession luxury, when avoiding work or settling for nothing-jobs in order to spend more time thinking up good chord progressions or T-shirt slogans was a lifestyle choice.
The irony of the slacker is that he or she is still dominated by work, as precisely that activity which must be avoided, and so only serves to reinforce the dominant values of the economy. Nowadays slacking is a mostly untenable option anyway, since even the crap jobs—grinding beans or demonstrating game-console features—are being snapped up by highly motivated people with good degrees and lots of extracurricular credits on their résumés. Too bad for them; but even worse for today’s would-be slackers, who are iced out of the niche occupations that a half-generation earlier supported the artistic ambitions of the mildly resistant.
It is still worth distinguishing between the slacker, of any description, and the idler. Slacking lacks a commitment to an alternative scale of value. By contrast, the genius of the genuine idler, whether as described by Diogenes or Jerome K. Jerome, is that he or she is not interested in work at all, but instead devoted to something else. What that something else involves is actually less important than the structural defection from the values of working. In other words, idling might involve lots of activity, even what appears to be effort; but the essential difference is that the idler does whatever he or she does in a spirit of infinite and cheerful uselessness that is found in all forms of play.
Idling at once poses a challenge to the reductive, utilitarian norms that otherwise govern too much of human activity and provides an answer—or at least the beginning of one—to the question of life’s true purpose. It is not too much to suggest that being idle, in the sense of enjoying one’s open-ended time without thought of any specific purpose or end, is the highest form of human existence. This is, to use Aristotelian language, the part of ourselves that is closest to the divine, and thus offers a glimpse of immortality. To be sure, from this Olympian vantage we may spy new purposes and projects to pursue in our more workaday lives; but the value of these projects, and the higher value from which these are judged, can be felt only when we slip the bonds of use.
Naturally something so essential to life can be easy to describe and yet surpassingly difficult to achieve. To take just the example most proximate to our current shared consciousness—I mean the experience you are having reading these words—I can tell you that I am writing them, on a deadline, while taking a train trip to deliver a keynote lecture. The trip was arranged months ago, with time carved out of my teaching schedule and the usual grid of meetings with students, colleagues, committees, and administrators that marks the week of any moderately busy university professor. I say nothing of the other obligations, social and cultural, the reading I need to do for next week’s seminars, the papers that must be graded, and so on.
Believe me, I am well aware of, and feel blessed by, the fact that my job is itself arguably an enjoyable and rewarding form of idling. I also know how lucky I am to have luxuries such as taking a train journey in the first place—though I confess that the train was chosen in part because it creates more productive time than traveling by the ostensibly more efficient air route. (I just checked my e-mail again, using the train’s Wi-Fi connection.)
This is not a complaint; it is, rather, a confession of the difficulties lurking in all forms of work, even the most enjoyable ones. In fact, the more freely chosen a work obligation, the harder it is to perceive that it might be an enemy of more divine play: looking out the window at the sublime expanse of Lake Ontario, reading Evelyn Waugh, composing a sonnet. The train is going very fast now, and my little keyboard is jerking around, reflecting my mental agitation on this point. I have to do a lot of backspacing. And no, I have no actual talent for sonnets.
At this point, we return with renewed urgency to the political aspect of the question of leisure and work. Everyone from Plato and Thomas More to H.G. Wells and Barack Obama has given thought to the question of the fair distribution of labor and fun within a society. This comes with an immediate risk: Too often, the “realist” rap against any such scheme of imagined distributive justice, which might easily entail state intervention concerning who does what and who gets what, is that the predicted results depend on altered human nature, are excessively costly, or are otherwise unworkable. The deadly charge of utopianism always lies ready to hand.
In a much-quoted passage, Marx paints an endearingly bucolic picture of life in a classless world: “In communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner, just as I have a mind, without ever becoming a hunter, fisherman, shepherd or critic.” Charles Fourier was even more effusive, describing a system of self-organizing phalansteries, or cells, where anarchist collectives would live in peace, engage in singing contests—the ideal-society version of band camp—and eventually turn the oceans to lemonade.
Veblen, after his fashion a sharp critic of capitalism but always more cynical than the socialist dreamers, showed how minute divisions of leisure time could be used to demonstrate social superiority, no matter what the form or principle of social organization; but he was no more able than Marx to see how ingenious capitalist market forces could be in adapting to changing political environments. For instance, neither of them sensed what we now know all too well, namely that democratizing access to leisure would not change the essential problems of distributive justice. Being freed from drudgery only so that one may shop or be entertained by movies and sports, especially if this merely perpetuates the larger cycles of production and consumption, is hardly liberation. In fact, “leisure time” becomes here a version of the company store, where your hard-won scrip is forcibly swapped for the very things you are working to make.
Worse, on this model of leisure-as-consumption, the game immediately gets competitive, if not zero-sum. And this is not just a matter of the general sociological argument that says humans will always find ways to outdo each other when it comes to what they buy, wear, drive, or listen to. This argument is certainly valid; indeed, our basic primate need for position within hierarchies means that such competition literally ceases only in death. These points are illustrated with great acumen by Pierre Bourdieu, whose monumental study Distinction is the natural successor to The Theory of the Leisure Class. No, the issue can really only be broached using old-fashioned Marxist concepts such as surplus value and commodity fetishism.
It was the Situationist thinker Guy Debord who made the key move in this quarter. In his 1967 book, Society of the Spectacle, he posited the notion of temporal surplus value. Just as classic Marxist surplus value is appropriated by owners from alienated workers who produce more than they consume, then converted into profit that is siphoned off into the owners’ pockets, so temporal surplus value is enjoyed by the dominant class in the form of sumptuous feast days, tournaments, adventure, and war. Likewise, just as ordinary surplus value is eventually consumed by workers in the form of commodities which they acquire with accumulated purchasing power, so temporal surplus value is distributed in the form of leisure time that must be filled with the experiences supplied by the culture industry.
Like other critics of the same bent—Adorno, Horkheimer, Habermas—Debord calls these experiences “banal,” spectacles that meet the “pseudo-needs” which they at the same time create, in a cycle not unlike addiction. Such denunciations of consumption are a common refrain in the school of thought that my graduate students like to call Cranky Continental Cultural Conservatism, or C4; but there is nevertheless some enduring relevance to the analysis. Debord’s notion of the spectacle isn’t really about what is showing on the screens of the multiplex or being downloaded on the computers of the nation; indeed, there is actually nothing to rule out the possibility of playful, even critical artifacts appearing in those places—after all, where else? Spectacle is, rather, a matter of social relations, just as the commodity in general is, which need to be addressed precisely by those who are subject to them, which is everyone. “The spectacle is not a collection of images, but a social relation among people, mediated by images,” Debord says. And: “The spectacle is the other side of money: It is the general abstract equivalent of all commodities.”
We are no longer owners and workers, in short; we are, instead, voracious and mostly quite happy producers and consumers of images. Nowadays, the images are mostly of ourselves, circulated in an apparently endless frenzy of narcissistic exhibitionism and equally narcissistic voyeurism: my looking at your online images and personal details, consuming them, is somehow still about me. Debord was prescient about the role that technology would play in this general social movement. “Just when the mass of commodities slides toward puerility, the puerile itself becomes a special commodity; this is epitomized by the gadget. ... Reified man advertises the proof of his intimacy with the commodity. The fetishism of commodities reaches moments of fervent exaltation similar to the ecstasies of the convulsions and miracles of the old religious fetishism. The only use which remains here is the fundamental use of submission.”
It strikes me that this passage, with the possible exception of the last sentence, could have been plausibly recited by Steve Jobs at an Apple product unveiling. For Debord, the gadget, like the commodity more generally, is not a thing; it is a relation. As with all the technologies associated with the spectacle, it closes down human possibility under the guise of expanding it; it makes us less able to form real connections, to go off the grid of produced and consumed leisure time, and to find the drifting, endlessly recombining idler that might still lie within us. No salvation from the baseline responsibility of being here in the first place is to be found in machines. In part, this is a simple matter of economics in the age of automation. “The technical equipment which objectively eliminates labor must at the same time preserve labor as a commodity,” Debord notes. “If the social labor (time) engaged by the society is not to diminish because of automation, ... then new jobs have to be created. Services, the tertiary sector, swell the ranks of the army of distribution.” This inescapable fact explains, at a stroke, the imperative logic of growth in the economy, the bizarre fetishizing of GDP as a measure of national health.
More profound, though, is a point that returns us to the original vision of a populace altogether freed from work by robots. To use a good example of critical consciousness emerging from within the production cycles of the culture industry, consider the Axiom, the passenger spaceship that figures in the 2008 animated film WALL-E. Here, robot labor has proved so successful, and so nonthreatening, that the human masters have been freed for the nonstop indulgence of their desires. As a result, they have over generations grown morbidly obese, addicted to soft drinks and video games, their bones liquefied in the ship’s microgravity conditions. They exist, but they cannot be said to live.
The gravest danger of offloading work is not a robot uprising but a human downgrading. Work hones skills, challenges cognition, and, at its best, serves noble ends. It also makes the experience of genuine idling, in contrast to frenzied leisure time, even more valuable. Here, with only our own ends and desires to contemplate—what shall we do with this free time?—we come face to face with life’s ultimate question. To ask what is worth doing when nobody is telling us what to do, to wonder about how to spend our time, is to ask why we are here in the first place. Like so many of the standard philosophical questions, these butt up, however playfully, against the threshold of mortality.
And here, at the limit of life that idling alone brings into view in a nonthreatening way, we find another kind of nested logic. Call it the two-step law of life. Rule No. 1 is tomorrow we die; and Rule No. 2 is nobody, not even the most helpful robot, can change Rule No. 1. Enjoy!