Report Warns That Autonomous Weapons in Action Could Be Rendered Uncontrollable

Friday, March 11, 2016
Artist's concept of Long Range Anti-Ship Missile speeding toward target (graphic: Lockheed Martin)

By John Markoff, New York Times


A new report written by a former Pentagon official who helped establish United States policy on autonomous weapons argues that such weapons could be uncontrollable in real-world environments where they are subject to design failure as well as hacking, spoofing and manipulation by adversaries.


In recent years, low-cost sensors and new artificial intelligence technologies have made it increasingly practical to design weapons systems that make killing decisions without human intervention. The specter of so-called killer robots has touched off an international protest movement and a debate within the United Nations about limiting the development and deployment of such systems.


The new report was written by Paul Scharre, who directs a program on the future of warfare at the Center for a New American Security, a policy research group in Washington, D.C. From 2008 to 2013, Scharre worked in the Office of the Secretary of Defense, where he helped establish U.S. policy on unmanned and autonomous weapons. He was one of the authors of a 2013 Defense Department directive that set military policy on the use of such systems.


In the report, titled “Autonomous Weapons and Operational Risk” (pdf), set to be published on Monday, Scharre warns about a range of real-world risks associated with weapons systems that are completely autonomous.


The report contrasts these completely automated systems, which have the ability to target and kill without human intervention, with weapons that keep humans “in the loop” in the process of selecting and engaging targets.


Scharre, who served as an Army Ranger in Iraq and Afghanistan, focuses on the ways completely automated systems might fail, as opposed to how such weapons are intended to work. To underscore the military consequences of technological failures, the report recounts a history of such failures in highly automated military and commercial systems.


“Anyone who has ever been frustrated with an automated telephone call support helpline, an alarm clock mistakenly set to ‘p.m.’ instead of ‘a.m.,’ or any of the countless frustrations that come with interacting with computers, has experienced the problem of ‘brittleness’ that plagues automated systems,” Scharre writes.


His underlying point is that autonomous weapons systems will inevitably lack the flexibility that humans have to adapt to novel circumstances, and that, as a result, killing machines will make mistakes that humans would presumably avoid.
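To make the idea of brittleness concrete, consider a deliberately toy sketch in Python (entirely hypothetical; the emitter names and the rule are invented for illustration, not drawn from the report or any real system). A fixed lookup rule classifies every radar emitter it was designed for, but the only thing it can do with a novel signal is apply a hard-coded default, where a human operator would recognize that the situation falls outside the rulebook:

    # Hypothetical toy classifier, invented for illustration only.
    # It handles the inputs its designer anticipated and has no way to
    # recognize that a novel input falls outside its design envelope.
    EXPECTED_EMITTERS = {
        "X-band-fire-control": "hostile",   # anticipated threat
        "S-band-navigation": "neutral",     # anticipated civilian traffic
    }

    def classify(emitter: str) -> str:
        # Unknown signals fall through to a hard-coded default.
        return EXPECTED_EMITTERS.get(emitter, "hostile")

    print(classify("X-band-fire-control"))  # "hostile" -- works as designed
    print(classify("weather-radar"))        # "hostile" -- novel input, wrong answer

The failure is silent: the function returns a confident-looking answer either way, which is exactly the inflexibility Scharre argues machines cannot escape and humans routinely do.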


Completely autonomous weapons are beginning to appear in military arsenals. For example, South Korea has deployed an automated sentry gun along the demilitarized zone with North Korea, and Israel operates a drone aircraft that will attack enemy radar systems when they are detected.


The U.S. military does not have advanced autonomous weapons in its arsenal. However, this year the Defense Department requested almost $1 billion to manufacture Lockheed Martin’s Long Range Anti-Ship Missile, which is described as a “semiautonomous” weapon under the definitions established by the Pentagon’s 2013 directive.


The missile is controversial because, although a human operator will initially select a target, it is designed to fly for several hundred miles while out of contact with the controller and then automatically identify and attack an enemy ship in an opposing fleet.


The Center for a New American Security report focuses on a range of unexpected behaviors in highly computerized systems, such as failures and bugs, as well as unanticipated interactions with the environment.


“On their first deployment to the Pacific, eight F-22 fighter jets experienced a Y2K-like total computer failure when crossing the international date line,” the report states. “All onboard computer systems shut down, and the result was nearly a catastrophic loss of the aircraft. While the existence of the international date line could clearly be anticipated, the interaction of the dateline with the software was not identified in testing.”
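The precise fault in the F-22 software has not been published, but the general failure mode is easy to reproduce in miniature. In the hypothetical Python sketch below (invented for illustration; not the aircraft’s actual avionics logic), a local date is derived naively from longitude, so the answer jumps backward by a full day the instant the 180th meridian is crossed, a discontinuity that code assuming dates only move forward may not survive:

    from datetime import datetime, timedelta, timezone

    # Hypothetical sketch, not real avionics code: derive a local date
    # from longitude using the rough solar-time rule of 15 degrees per hour.
    def local_date(utc_now: datetime, longitude_deg: float):
        offset = timedelta(hours=longitude_deg / 15.0)
        return (utc_now + offset).date()

    utc = datetime(2016, 3, 11, 0, 30, tzinfo=timezone.utc)
    print(local_date(utc, 179.9))   # just west of the date line
    print(local_date(utc, -179.9))  # a fraction of a degree east: a day earlier

The two longitudes are physically only a few hundred meters apart, yet the computed dates differ by a day. As the report notes, the date line itself was perfectly foreseeable; the problem was that its interaction with the software was never exercised in testing.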


The report also cites the lack of transparency in the artificial intelligence technologies behind most recent advances in machine vision and speech recognition as a potential source of catastrophic failures.


As an alternative to completely autonomous weapons, the report advocates what it describes as “Centaur Warfighting.” The term “centaur” has recently come to describe systems that tightly integrate humans and computers.


However, in a telephone interview, Scharre acknowledged that simply having a human push the buttons in a weapons system is not enough.


“Having a person in the loop is not enough,” he said. “They can’t be just a cog in the loop. The human has to be actively engaged.”


To Learn More:

Autonomous Weapons and Operational Risk (by Paul Scharre, Center for a New American Security) (pdf)

U.S. and U.K. Accused of Impeding Progress on U.N. “Killer Robot” Ban (by Noel Brinkerhoff, AllGov)

U.N. Convenes to Discuss Danger of Killer Robots while Nobel Laureates Urge They Be Banned (by Noel Brinkerhoff, AllGov)

U.N. Calls for Global Ban on Autonomous Killer Robots (by Noel Brinkerhoff, AllGov)

Comments

balboa schwartz 8 years ago
Go back... go way back. 1968, Star Trek, episode “The Ultimate Computer.” Shades of things to come.
George Drexel 8 years ago
One of the worst things about automated weapons such as the Israeli drone is that it will attack any radar system it detects, assuming it is the enemy. What’s to stop the enemy from creating a device that sends out the type of electromagnetic signature the Israeli drone is programmed to attack, and sneaking that transmitter into Israel so that the drone attacks an Israeli target?
