The history of artificial intelligence is one of fiction made real. In 1843, Charles Babbage and Ada Lovelace (Lord Byron’s daughter) began discussing the possibility of a mechanical thinking machine. In the 1950s, computer programs that could play chess and solve word puzzles, along with the popularity of science-fiction novels, gave birth to wildly enthusiastic (and unrealistic) expectations that robots would soon take over most (if not all) human tasks. In 2012, South Korea started using robots as prison guards, partly because robots are unlikely to accept bribes (the country already employed robots as teachers and border sentries).
In 1981, however, a more disturbing milestone took place: Kenji Urada, a Japanese factory worker, became what is widely cited as the first confirmed case of a human being killed by a robot. The 37-year-old Urada was performing maintenance at a Kawasaki plant and had forgotten to power down the robot he was working on. The robot’s powerful hydraulic arm, continuing about its task, pushed Urada into nearby machinery and killed him. The seemingly simple question of who was criminally liable for his death turns out to be anything but straightforward.
In When Robots Kill: Artificial Intelligence Under Criminal Law (forthcoming from Northeastern University Press), Gabriel Hallevy explores the legal consequences of the rise of robots. While the field of “robo-ethics” has a storied history (ranging from Isaac Asimov’s “Three Laws of Robotics” to the 2004 Fukuoka “World Robot Declaration”), scholars of robots and criminal law operate with little apparent precedent. The question of criminal liability, distinct from that of moral accountability, rests largely on establishing that an entity has awareness. Do robots have a level of cognition and volition that constitutes the legal potential for criminal liability?
Hallevy, a professor of criminal law at Ono Academic College, in Israel, thinks so; he’s ready to put robots on trial.
While some legal scholars advocate drafting new “Robot Law” to give artificially intelligent robots unique legal standing, Hallevy thinks current criminal law is adequate. His reason? As technology comes closer to creating machina sapiens (advanced, information-processing, decision-making robots), it comes closer to imitating the human mind, making criminal law designed for people more applicable to robots.
And while it seems strange to apply human law to nonhuman beings (it “may sound radical,” Hallevy allows), precedent does exist. In fact, since the 1700s, Western nations have applied criminal law to corporations. Corporations, like robots, are not flesh-and-blood beings with a spirit or soul, yet Hallevy argues that defining them as having the potential for criminal liability protects society. “I don’t think the story of corporations and the story of robots is that different,” he says.
Of course, as with corporations, some aspects of criminal law will not apply to robot offenders; imagine a lawyer constructing a defense by pleading that the robot defendant was insane or intoxicated. Still, When Robots Kill finds that some parallels exist: a military robot that malfunctions after hackers infect it with a virus might claim something like insanity, and a robot that goes haywire and commits a crime after someone maliciously sprays it with a gas might claim the equivalent of intoxication.
Sentencing poses another logistical and theoretical challenge. First of all, does it even make sense to punish a machine? In modern legal practice, capital punishment, imprisonment, and probation are directed at human beings largely for reasons of incapacitation or rehabilitation. In the context of artificial intelligence, Hallevy argues, similar justifications apply, and robots should, in appropriate situations, stand trial. An offending robot could be sentenced to public service, performing its task for the state for a certain number of hours. Disassembling the robot could stand in for capital punishment, and upgrading its hardware or software might serve as rehabilitation. Whether for people, corporations, or robots, Hallevy says, “The law should be applied equally.”
All of this may sound like the stuff of Hollywood, and to some extent, it is. “The idea for the book came when I saw I, Robot with Will Smith,” the author says. “It was a few months after I saw the movie A.I., and I thought to myself, ‘AI is capable of doing very good things, but what about the possibility for criminal offenses?’” Working with his editor, Phyllis Deutsch, Hallevy began to write a book not only for legal scholars working in the subfield of artificial intelligence but also for science enthusiasts and undergraduates. Hallevy gives much credit to his editor, noting that his proposed title for the book, “A General Theory for the Criminal Liability of Artificial Intelligence Entities,” was wisely passed over.
Still, dangers persist for the serious scholar who delves too deeply into robo-futuristic themes. Although his work is frequently cited in Israeli Supreme Court cases, Hallevy worries that some legal scholars think he can’t “distinguish between science and science fiction.” And while he thinks the intersection of robotics and the law is home to promising research, he allows that some books (like David Levy’s Love and Sex With Robots) can give the field a sensational cast that isn’t always warranted.
But Hallevy remains committed to what this project has taught him: “Either we impose criminal liability on AI entities, or we must change the basic definition of criminal liability as it developed over thousands of years, and abandon the traditional understandings of criminal liability.”