Arms Control Groups Challenge Planned U.S. Military Reliance on Robot-Controlled Weapons

Thursday, April 14, 2016
Artist's concept of semi-autonomous LRASM missile speeding toward target (graphic: Lockheed Martin)




By John Markoff, New York Times


Two international arms control groups on Monday issued a report calling for the maintenance of human control over a new generation of weapons that are increasingly capable of targeting and attacking without the involvement of people.


The report (pdf), released by Human Rights Watch and the Harvard Law School International Human Rights Clinic at the opening of a weeklong United Nations meeting on autonomous weapons in Geneva, potentially challenges an emerging U.S. military strategy that counts on technological advantages and will increasingly depend on weapons systems that blend humans and machines.


Known as the Third Offset, the strategy seeks to exploit new technologies to maintain U.S. military superiority. Pentagon officials have recently stated that the new technologies, and particularly artificial intelligence software, will help, rather than replace, human soldiers who must make killing decisions.


“Machines have long served as instruments of war, but historically humans have always dictated how they are used,” the report, titled “Killer Robots and the Concept of Meaningful Human Control,” said.


“The evolution of technology has the potential to change that reality, and the implications are profoundly disturbing,” the report added.


While some have argued that in the future, autonomous weapons might be able to better adhere to the laws of war than humans, an international debate is now emerging over whether it is possible to limit the evolution of weapons that make killing decisions without human involvement.


Current U.S. military guidelines, published in 2012, call for commanders and weapons operators to exercise “appropriate levels of human judgment” over the use of force. The guidelines do not completely prohibit autonomous weapons, but require that high-level Pentagon officials authorize them. They draw a line between semiautonomous weapons, whose targets are chosen by a human operator, and fully autonomous weapons that can hunt and engage targets without intervention.


New weapons that will enter the U.S. arsenal as early as 2018 may make the distinction a vital one. One example is the Long Range Anti-Ship Missile, or LRASM, initially designed by the Defense Advanced Research Projects Agency and to be manufactured by Lockheed Martin Corp. This year, the Pentagon asked Congress to authorize $927 million over the next five years for the system.


The missile is being developed in large part because of concerns that U.S. aircraft carriers will be required to operate farther from China because of that country's growing military power.


Yet the missile has raised concerns among critics because, after being launched by a human operator, it is designed to fly to a targeted ship beyond human contact and make final targeting decisions autonomously.


“I would argue that LRASM is intended primarily to threaten China and Russia and is only likely to be used in the opening shots of a nuclear war that would quite likely destroy our civilization and kill a large fraction, or most, or nearly all human beings,” said Mark A. Gubrud, a physicist and a member of the International Committee for Robot Arms Control, an activist group working for the prohibition of autonomous weapons.


The ability to recall a weapon may be a crucial point in any ban on autonomous weapons, said Bonnie Docherty, the author of the report and a lecturer on law and senior clinical instructor at the International Human Rights Clinic at Harvard Law School.


Weapons specialists said the exact capabilities of systems like LRASM are often protected as classified information.


“We urge states to provide more information on specific technology so the international community can better judge what type and level of control should be required,” Docherty said.


The United States is not the only nation pursuing highly automated weapons. Britain, Israel and Norway have deployed missiles and drones that carry out attacks against enemy radar, tanks or ships without direct human control.


The most recent U.S. military budget for fiscal year 2017 calls for spending $3 billion on what it describes as “human machine combat teaming.” As machines become more capable and the pace of warfare quickens because of automation, many weapons specialists think that it will be challenging to keep humans in control.


At the same time, some nations are calling for an international agreement that would limit such weapons.


“There seems to be a broad consensus that, at some level, humans should be involved in lethal force,” said Paul Scharre, a senior fellow at the Center for a New American Security in Washington and one of the authors of the 2012 Pentagon guidelines.


To Learn More:

Killer Robots and the Concept of Meaningful Human Control (by Bonnie Docherty, Human Rights Watch and International Human Rights Clinic) (pdf)

Report Warns that Autonomous Weapons in Action Could be Rendered Uncontrollable (by John Markoff, New York Times)

U.S. and U.K. Accused of Impeding Progress on U.N. “Killer Robot” Ban (by Noel Brinkerhoff, AllGov)

U.N. Convenes to Discuss Danger of Killer Robots while Nobel Laureates Urge They Be Banned (by Noel Brinkerhoff, AllGov)

U.N. Calls for Global Ban on Autonomous Killer Robots (by Noel Brinkerhoff, AllGov)
