Robots at War: Scholars Debate the Ethical Issues

By Don Troop
September 10, 2012
The X-47B, an unmanned military aircraft (Alan Radecki, Northrop Grumman)

The first decade of the 21st century has been called the decade of the drone. Unmanned aerial vehicles, remotely operated by pilots in the United States, rain Hellfire missiles on suspected insurgents in South Asia and the Middle East.

Now a small group of scholars is grappling with what some believe could be the next generation of weaponry: lethal autonomous robots. At the center of the debate is Ronald C. Arkin, a Georgia Tech professor who has hypothesized lethal weapons systems that could be ethically superior to human soldiers on the battlefield. A professor of robotics and ethics, he has devised algorithms for an “ethical governor” that he says could one day guide an aerial drone or ground robot to either shoot or hold its fire in accordance with internationally agreed-upon rules of war.
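
What such a restraint layer might look like can be suggested with a minimal, purely illustrative sketch, written here in Python. The class, the rules, and the values below are hypothetical and do not reflect Mr. Arkin’s actual architecture; the point is only that the governor acts as a filter, withholding fire whenever an encoded rule of war would be violated.

```python
# Purely illustrative sketch, not Mr. Arkin's system: a restraint-style "governor"
# permits an engagement only when every encoded rule of war is satisfied;
# otherwise it holds fire. All names, rules, and figures here are hypothetical.
from dataclasses import dataclass


@dataclass
class Target:
    is_combatant: bool          # principle of distinction: only combatants may be attacked
    is_hors_de_combat: bool     # wounded, surrendered, or otherwise "outside the fight"
    expected_civilian_harm: int
    expected_military_advantage: int


def permit_engagement(target: Target) -> bool:
    """Return True only if no encoded constraint forbids firing."""
    if not target.is_combatant:
        return False  # distinction: civilians are never legitimate targets
    if target.is_hors_de_combat:
        return False  # a wounded or surrendered fighter may not be attacked
    if target.expected_civilian_harm > target.expected_military_advantage:
        return False  # proportionality: expected harm must not exceed expected advantage
    return True


# Example: a wounded fighter is held fire upon, whatever the other factors say.
wounded = Target(is_combatant=True, is_hors_de_combat=True,
                 expected_civilian_harm=0, expected_military_advantage=5)
print(permit_engagement(wounded))  # False -> hold fire
```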

But some scholars have dismissed Mr. Arkin’s ethical governor as “vaporware,” arguing that current technology is nowhere near the level of complexity that would be needed for a military robotic system to make life-and-death ethical judgments. Clouding the debate is the fact that any mention of lethal robots floods the minds of ordinary observers with Terminator-like imagery, creating expectations that are unreasonable and counterproductive.

If there is any point of agreement between Mr. Arkin and his critics, it is this: Lethal autonomous systems are already inching their way into the battle space, and the time to discuss them is now. The difference is that while Mr. Arkin wants such conversations to result in a plan for research and governance of these weapons, his most ardent opponents want them banned outright, before they contribute to what one calls “the juggernaut of developing more and more advanced weaponry.”

Mr. Arkin, who has more than a quarter-century of experience performing robotics research for the military, says his driving concern is the safety of noncombatants.

“I am not a proponent of lethal autonomous systems,” he says in the weary tone of a man who has heard the accusation before. “I am a proponent of when they arrive into the battle space, which I feel they inevitably will, that they arrive in a controlled and guided manner. Someone has to take responsibility for making sure that these systems ... work properly. I am not like my critics, who throw up their arms and cry, ‘Frankenstein! Frankenstein!’”

Nothing would make him happier than for weapons development to be rendered obsolete, says Mr. Arkin. “Unfortunately, I can’t see how we can stop this innate tendency of humanity to keep killing each other on the battlefield.”

Thrill of Discovery

The early days of robotics research were frustrating for scientists and engineers because of the machines’ sensory and computational limitations. Things started to get interesting, Mr. Arkin recalls, as researchers made gains in areas like autonomous pathfinding algorithms, sensing technology, and sensor processing.

“I was very enthralled with the thrill of discovery and the drive for research and not as much paying attention to the consequences of, ‘If we answer these questions, what’s going to happen?’” he says. What was going to happen soon became apparent: Robotics started moving out of the labs and into the military-industrial complex, and Mr. Arkin began to worry that the systems could eventually be retooled as weaponized “killing machines fully capable of taking human life, perhaps indiscriminately.”

His “tipping point” came in 2005 at a Department of Defense workshop, where he was shown a grainy, black-and-white video recorded by a gun camera on a U.S. Apache attack helicopter hovering above a roadside in Iraq.

In the cross hairs, three probable insurgents appeared as thermal images moving between a pair of trucks and a field, where one of them tossed an apparent weapon. “Smoke him,” a commander’s voice ordered. Seconds later, automatic fire from a helicopter chain gun cut apart first one man, then another. The third took shelter under the larger of the trucks. Mr. Arkin recorded the rest of the dialogue in his book Governing Lethal Behavior in Autonomous Robots (CRC Press, 2009):

Voice 1: Want me to take the other truck out?

Voice 2: Roger ... Wait for move by the truck.

Voice 1: Movement right there ... Roger, he’s wounded.

Voice 2: [No hesitation] Hit him.

Voice 1: Targeting the truck.

Voice 2: Hit the truck and him. Go forward of it and hit him.

[Pilot retargets for wounded man.]

Voice 1: Roger.

[Audible weapon discharge—wounded man has been killed.]

“As I see it,” Mr. Arkin wrote, “the human officer ordered the execution of a wounded man,” possibly violating the military rule that forbids the killing of someone who is hors de combat or “outside the fight.”

The “gruesome” video set him to wondering about a potential humanitarian side to his work: Could a drone, operating independently of human control but programmed to follow the Geneva Conventions and other international laws of war, “have refused to shoot upon an already wounded and effectively neutralized target?” It was a tantalizing but controversial idea.

Robots Join the War

The history of military robotics began with the Serbian-American inventor Nikola Tesla, whose pioneering work in electrical engineering led to the alternating-current (AC) systems that still power homes today. In the book Wired for War: The Robotics Revolution and Conflict in the 21st Century (Penguin Press, 2009), Peter W. Singer describes how a U.S. government official laughed at Tesla in 1898 when he tried to sell the idea of radio-controlled torpedoes for the military.

Germany would be the first to find a military use for Tesla’s wireless-radio technology, ramming a British vessel with an explosive-laden motorboat during World War I, writes Mr. Singer, director of the Brookings Institution’s 21st Century Defense Initiative. In World War II, Nazi forces dropped the first remotely piloted drone and steered it to its target using radio controls.

During the Vietnam War, the U.S. military’s Firefly drone flew 3,435 reconnaissance missions over Southeast Asia, and in 1991 laser-guided bombs and missiles known as smart bombs became the stars of the Persian Gulf War. A retired Air Force officer told Mr. Singer that “the magic moment” for drone warfare came in 1995, when unmanned systems were integrated with Global Positioning System military satellites.

Five years later, Sen. John Warner, the Virginia Republican who led the Senate Armed Services Committee, muscled a requirement into the National Defense Authorization Act of 2001 specifying that one-third of all attack aircraft should be unmanned by 2010 and one-third of all ground combat vehicles driverless by 2015.

“His insistence on pushing unmanned systems to the next level had nothing to do with what was possible with robotics at the time,” Mr. Singer wrote of Senator Warner. Rather, the lawmaker was concerned that the public’s growing distaste for American war casualties would neuter U.S. foreign policy, and that the military needed a technological draw to entice young people to enlist.

The terrorist attacks of September 11, 2001, spurred still more expansion in military spending, especially for robotics, and the Pentagon’s drone fleet has since swelled from 50 to more than 7,000. The changes came so swiftly that George R. Lucas Jr., a professor of ethics and public policy at the U.S. Naval Postgraduate School, says he and others had to embark on a crash course in unmanned technology. “A lot of us really found ourselves caught off guard and really behind the eight ball about this stuff,” says Mr. Lucas, who holds a joint appointment at the U.S. Naval Academy.

Today the United States has two counterterrorism drone programs, according to the American Security Project, a bipartisan public-policy organization that focuses on national-security issues. The Pentagon and the Joint Special Operations Command openly operate one program in Afghanistan, Iraq, and Libya, and the Central Intelligence Agency operates another covertly in Pakistan, Somalia, and Yemen.

Based on figures from 16 news outlets in the Middle East, South Asia, and the United States, the New America Foundation estimates that remotely piloted U.S. drones in Pakistan have killed 1,873 to 3,171 people since 2004. Up to 14 percent of the dead have been classified as either civilian or unknown, though the number of noncombatant casualties has reportedly plummeted to nearly zero this year.

Americans approve of the drone campaign 62 percent to 28 percent, according to the Pew Research Center’s Global Attitudes Project. (The rest of the world is less enthusiastic; approval ratings among the other 20 nations in the survey ranged from a high of 44 percent in Britain to just 5 percent in Greece.)

A New York Times article in May about the Obama administration’s embrace of drone warfare quoted Dennis C. Blair, the president’s former director of national intelligence, as saying that the remotely piloted campaign was “the politically advantageous thing to do—low cost, no U.S. casualties, gives the appearance of toughness. ... Any damage it does to the national interest only shows up over the long term.”

‘Artificial Conscience’

A year after the 2005 Defense Department workshop where he saw the Apache helicopter video, Mr. Arkin, the Georgia Tech roboticist, won a three-year grant from the U.S. Army Research Office for a project with a stated goal of producing “an artificial conscience” to guide robots in the battlefield independent of human control. The project resulted in a decision-making architecture that Mr. Arkin says could potentially lead to ethically superior robotic warriors within as few as 10 to 20 years, assuming the program is given full financial support.

“I’m not talking about replacing war fighters one for one,” he says. “I’m talking about designing very narrow, very specific machines for certain tasks that will work alongside human war fighters to carry out particular types of operations that humans don’t do particularly well at, such as building-clearing operations.”

Rather than risking one’s own life to protect noncombatants who may or may not be behind a door, Mr. Arkin says, a soldier “might have a propensity to roll a grenade through there first ... and there may be women and children in that room.” A robot could enter the room and gauge the level of threat from up close, eliminating the risk to a soldier.

Autonomous weapons bring other advantages, Mr. Arkin notes. The militants who have engaged U.S. forces in Iraq and Afghanistan lack the technological savvy of other potential enemies. “Imagine we are fighting a sophisticated enemy in complete war once again,” he suggests. An American pilot at a military base in Nevada is guiding a drone several thousand miles away when the enemy knocks out the communication link. “What do the drones do?” Mr. Arkin asks. “Do they circle? Do they go back? Or is authority going to be given to them to become autonomous?”

The scenario is not far-fetched. Researchers from the Radionavigation Laboratory of the University of Texas at Austin grabbed headlines in June when they managed to hijack an unmanned aerial vehicle in a “spoofing” test arranged by the Department of Homeland Security.

A paper published last year in The Columbia Science and Technology Law Review explored the ethical, policy, legal, and operational issues that surround lethal autonomous weapons. The authors were Mr. Arkin and 10 other scholars from the Consortium for Emerging Technologies, Military Operations, and National Security, or Cetmons—a collection of university ethics centers whose members meet regularly to discuss the complex issues raised by new military technologies. The paper calls the development of such weapons “inevitable and relatively imminent.”

Most scholars agree that the line separating existing autonomous weapons from their human-controlled forebears is a blurry one. Mr. Singer notes that the Aegis computer system, which America has used since the 1980s to defend naval vessels against air and missile attacks, has a range of settings that go from semiautomatic to “casualty.” In the final setting, the system simply does what is necessary to defend the ship.

“A lot of air-defense systems operate under this principle, where you have an incoming threat, and it’s coming in so fast that the system can automatically destroy it,” he says. A human can shut off the system even as it’s about to fire, but as Mr. Singer puts it, “You’ve got what I call mid-curse-word reaction time: ‘Oh, cr—.’ That’s about it.”
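
A rough, hypothetical sketch of that “human on the loop” arrangement, again in Python, is below. The names and the veto window are invented and do not describe the actual Aegis software; the point is simply that the system engages on its own unless a human veto arrives within a very short window.

```python
# Hypothetical illustration of the "human on the loop" pattern Mr. Singer describes,
# not the Aegis software itself: the system fires automatically unless an operator
# vetoes within a very short window.
import threading

VETO_WINDOW_SECONDS = 1.5  # invented figure; the point is that the window is tiny


def auto_engage(threat_id: str, operator_veto: threading.Event) -> str:
    """Engage the threat unless the operator sets the veto flag in time."""
    if operator_veto.wait(timeout=VETO_WINDOW_SECONDS):
        return f"held fire on {threat_id}: operator veto"
    return f"engaged {threat_id}: no veto within {VETO_WINDOW_SECONDS}s"


# Example: no veto arrives, so the system fires on its own authority.
veto = threading.Event()
print(auto_engage("inbound-threat-01", veto))
```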

Even when the system is in semiautomatic mode, the humans who are monitoring it sometimes trust the computer more than their own instincts, Mr. Singer notes. In 1988, during the Iran-Iraq war, that resulted in the shooting down of Iran Air Flight 655, a civilian aircraft with 290 people aboard. Although the jet was broadcasting a civilian signal, the Aegis computer system displayed an icon for an Iranian F-14 fighter.

“Not one of the 18 sailors and officers on the command crew was willing to challenge the computer’s wisdom,” Mr. Singer wrote in Wired for War. “They authorized it to fire.”

Other systems can be made fully autonomous, such as the mounted guns that South Korea uses to protect its border with North Korea.

According to Mr. Arkin, whether a system is autonomous depends on how you define the term, and that definition may vary by discipline.

“When you speak to a philosopher, autonomy deals with moral agency and the ability to assume responsibility for decisions,” he says. “Most roboticists have a much simpler definition in that context. In the case of lethal autonomy, it’s the ability to pick out a target and engage that target without additional human intervention.”

The ‘Illusion’ of Inevitability

The Cetmons paper says that now is the time to discuss an agenda for studying and regulating lethal autonomous systems, before the commercial momentum behind the technology becomes “too strong and entrenched.”

That is a worry shared by many, including Wendell Wallach, a scholar at Yale University’s Interdisciplinary Center for Bioethics. Mr. Arkin’s optimistic view of the potential capabilities of robotics, Mr. Wallach says, misleads unsophisticated observers—potentially including some policy makers—who are not aware of how vast a gulf separates existing technology from Arnold Schwarzenegger’s Terminator.

“The danger of Ron’s language is that it creates the illusion that moral robotic weaponry is inevitable and right around the corner, and therefore we shouldn’t be so concerned about the development of autonomous lethal weapons,” says Mr. Wallach, co-author of Moral Machines: Teaching Robots Right From Wrong (Oxford University Press, 2009).

Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield, in England, says that while the Geneva Conventions require that new weapons systems be tested during their development to ensure that they won’t inadvertently harm civilians, there is no such requirement for systems that are used for other purposes, like surveillance. That was the role of Predator drones until the terrorist attacks of 2001, after which the CIA and the Air Force equipped them with Hellfire missiles.

Since most unmanned systems can quickly be weaponized, Mr. Sharkey fears that is precisely what would happen if America suddenly found itself in a new war. “It’s called military necessity,” he says. “We’ve got this facility, and we’re engaged in a war. We’ll stick the weapons on.”

Mr. Sharkey argues that lethal autonomous systems will never attain the proficiency of humans in following such “just war theory” cornerstones as distinction and proportionality. The principle of distinction establishes that active combatants are the only legitimate targets of attack. Civilians, including children and the elderly, are to be excluded, as are combatants who are wounded or have surrendered. When it is impossible to fully protect noncombatants, the principle of proportionality requires that any loss of life be proportional to the direct military advantage that one expects to gain.

Mr. Sharkey rejects both the level of complexity and the timeline that Mr. Arkin has proposed.

“From my knowledge of artificial intelligence, I know there is no possibility you could discriminate between a civilian and a combatant with a robot sensing system,” Mr. Sharkey says. “I couldn’t see it happening even within the next hundred years, really.”

In Wired for War, Mr. Singer lists several examples of combat situations that would stymie even the most experienced soldier, such as when a Somali sniper covered himself with children to prevent being shot. How would a lethal autonomous robot respond in such a scenario?

Mr. Arkin responds that his critics are the ones who are exaggerating what he has hypothesized could be possible.

“It is a restraint mechanism,” he says. “There is no high-level reasoning. There is no moral agency.”

In the case of the Somali sniper, the ethical robot would simply follow the laws of war and hold its fire, he says. The robot could also approach the sniper without the fear for its own safety that a human soldier would feel.

Nonetheless he acknowledges that lethal robots would not be perfect.

“They will make mistakes,” Mr. Arkin says. “But if they make fewer mistakes than human soldiers do, that translates into a saving of noncombatant lives.”

Mr. Lucas, the Naval Postgraduate School ethicist and Cetmons member, refers to Mr. Arkin as “the responsible extreme” because he is not pursuing the “relentless drive toward machine autonomy” that seems to compel some scientists. Where Mr. Lucas parts ways with his friend is over his “anthropomorphic” terminology.

In a chapter of a book forthcoming from Oxford University Press, Killing by Remote Control, Mr. Lucas writes that Mr. Arkin and his colleagues “complicate the issues unnecessarily by invoking spurious concepts like machine ‘morality’ and proposing ‘ethical governors.’”

Lowering the Bar for War

In Mr. Singer’s view, the lure of lethal autonomous drones is the promise that we can conduct war without sending people into harm’s way and suffering the human and political consequences.

“But war has a wonderful way of paying you back,” he says. “You think you may be getting away with something, avoiding political consequences, but often it comes back to haunt you in some way, shape, or another.”

Mr. Sharkey is a founder of the International Committee for Robot Arms Control, a collection of scholars who support an international ban on autonomous lethal targeting. Mr. Wallach agrees with the goal, saying that the weapons are “not yet embedded” in the American defense arsenal, as unmanned aerial vehicles are. But he fears that arms-control negotiations would drag on, and that compliance would be difficult to verify.

Thus Mr. Wallach has advocated, as a first step, establishing an international humanitarian principle stating that machines should not be making “decisions” to kill humans. In June he began circulating a proposal that calls for an executive order against the development of “autonomous lethal-force-initiating systems.” He says that getting the president to declare that the United States will not create such weaponry could prompt NATO to jump on board as well.

“It’s a very specific strategy on how to move forward,” Mr. Wallach says. “You establish under international humanitarian law the principle that this is not an appropriate form of warfare. It becomes comparable to biological weaponry, gas warfare, lasers on the battlefield, other things that have now been declared immoral, inappropriate in warfare.”

Braden R. Allenby, a professor of engineering, ethics, and law at Arizona State University and chairman of Cetmons, says people in the military are also skeptical of lethal autonomous robots “because they’re the ones who are going to get blamed for it” if something goes wrong.

Yet he says it’s clear that people in the defense establishment are studying such systems even if they are not planning to deploy them.

“If it’s your job to be concerned about the security of the United States, and that’s what the president has told you to do, then you’ve got to try to understand this stuff,” Mr. Allenby says. “Because if you don’t, and then China does, or Russia does, or India does, or Brazil does, then you haven’t done your job. You’ve failed.”

Corrections (September 13, 2012, 5:01 p.m.): This article originally stated that Iran Air Flight 655 was shot down when the Aegis air-defense system was set on “casualty,” an automatic mode. The article has been corrected to show that the decision to fire was made by human soldiers, based on misinformation from the system. The article also incorrectly named the institution where Braden R. Allenby is a professor. It is Arizona State University, not the University of Arizona. The article has been updated to reflect this correction.

Don Troop
Don Troop joined The Chronicle in 1998, and he has worked as a copy editor, reporter, and assigning editor over the years.