Marine Corps photo

Marines work on armed MUTT robot in MIX-16 experiment.

WASHINGTON: The debate over the use of artificial intelligence in warfare is heating up, with Google employees protesting their company’s Pentagon contracts, South Koreans protesting university cooperation with their military, and international experts gathering next week to debate whether to pursue a treaty limiting military AI. While countries like Russia and China are investing heavily in artificial intelligence without restraint, the US and allies such as South Korea face a rising tide of opposition at home.

Rule of Law

The international conclave has the kind of name you only encounter when dealing with the United Nations and related organizations: the Convention on Conventional Weapons Group of Governmental Experts on Lethal Autonomous Weapons Systems (CCWGGELAWS?). Those experts meet next week and in August. Note they have a new acronym for armed AI systems: LAWS.

How is all this arcana relevant to the US military? Treaties are the bedrock of international relations, specific agreements that spell out what states may and may not do. Idealists, and those who want to bind their enemy’s conduct, often believe treaties are the best mechanism for governing what is allowed in warfare.

Notre Dame photo

Mary Ellen O’Connell

Mary Ellen O’Connell, a law professor at Notre Dame, argued with quiet passion for restraints on AI, comparing it to nuclear weapons and other weapons of mass destruction. What happens, she asked at a Brookings Institution forum today, when AI is mated with nanotechnology or other advanced technologies? How do humans ensure they are the final decision makers? Given all that, she predicts “we are going to see some kind of limitation on AI” when the governments that belong to the Convention on Conventional Weapons meet in November to consider what the experts have come up with.

To get an idea where many of those experts are coming from, take a look at this 2016 report by the International Committee of the Red Cross:

“The development of autonomous weapon systems — that is, weapons that are capable of independently selecting and attacking targets without human intervention — raises the prospect of the loss of human control over weapons and the use of force.”

O’Connell raised this issue, implying that the lack of personal accountability might make AI impermissible under international law.

Former Defense Secretary Ash Carter pledged several times that the United States would always keep a human in or on the loop of any system designed to kill other humans. As far as we know, that is still US policy.

Duke University photo

Charles Dunlap

A very different perspective on the issue was offered by retired Air Force Maj. Gen. Charlie Dunlap, executive director of Duke Law School’s Center on Law, Ethics and National Security and the service’s former Deputy Judge Advocate General. He cautioned against trying to ban specific technologies, noting that there is an international ban on using lasers to blind people in combat, but no ban on using a laser to incinerate someone. The better approach is to “strictly comply with the laws of war, rather than try to ban certain types of technology,” he argued.

As a public service, let’s remind our readers of one of the first efforts to deal with this issue, Isaac Asimov’s “Three Laws of Robotics.”

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Of course, Asimov later added what is known as the Zeroth Law: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” An AI that kills enemy humans in the service of a government would appear to violate Asimov’s First Law. But the actual laws of war, codified in the Geneva Conventions and related treaties, are clearly defined and do not ban intelligent systems from commanding and using weapons. So long as an AI obeys the rules of war, and can be destroyed or curtailed should it begin violating them, one can argue it is less likely than a human to break down under the stress of combat and violate those rules.
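For readers who like to see the hierarchy spelled out, here is a toy sketch, purely illustrative and not any fielded system, of how Asimov’s laws could be expressed in code as an ordered series of checks: the Zeroth Law is consulted first, and no lower law can override a higher one. Every predicate here is a hypothetical placeholder.

```python
# Toy illustration only: Asimov's laws as an ordered rule check.
# Every predicate is a hypothetical placeholder; the point is the hierarchy.
ASIMOV_LAWS = [
    ("Zeroth Law", lambda a: not a.get("harms_humanity", False)),
    ("First Law",  lambda a: not a.get("harms_human", False)),
    # Read here as: act only on a human's order (a strict "human in the loop").
    ("Second Law", lambda a: a.get("ordered_by_human", False)),
    ("Third Law",  lambda a: not a.get("destroys_robot", False)),
]

def evaluate(action: dict) -> tuple[bool, str]:
    """Test a proposed action against each law, highest priority first."""
    for name, permits in ASIMOV_LAWS:
        if not permits(action):
            return False, f"refused: violates the {name}"
    return True, "permitted"

# A lethal, human-ordered action still fails at the First Law.
print(evaluate({"harms_human": True, "ordered_by_human": True}))
# A benign, human-ordered task passes every check.
print(evaluate({"ordered_by_human": True}))
```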

Google photo

The Google Car pioneers many of the same technologies needed for autonomous military vehicles.

Google Revolt

Meanwhile, thousands of engineers, researchers, and scientists from Seoul to Silicon Valley are in open revolt against the marriage of artificial intelligence and the military, targeting Google and a top South Korean research university over projects each has launched with its country’s armed forces.

The issue of the militarization of AI has been simmering for years, but recent, well-publicized advances by the Chinese and Russians have pushed Western military leaders to scramble to keep pace by pumping tens of millions of dollars into collaborations with civilian and academic institutions. Those projects, and the headlines they’re generating, have dragged into the open long-standing tensions over robotics research and the accelerating arms race in AI and autonomous technologies.

A group of about 3,100 Google engineers signed a petition protesting the company’s involvement with Project Maven, the project run by the Pentagon’s Algorithmic Warfare Cross-Functional Team, which uses AI to analyze drone footage far more quickly and thoroughly than human analysts can in order to help military commanders.
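Maven’s actual pipeline is not public, but the general pattern described, machine vision triaging hours of full-motion video so human analysts review only the frames that matter, can be sketched with open-source tools. The sketch below is an assumption-laden illustration using a generic pretrained detector from torchvision; the file name and thresholds are made up, and nothing here reflects the real program.

```python
# Illustrative sketch only, not Project Maven's actual (non-public) pipeline.
# It shows the general pattern: run an off-the-shelf object detector over
# sampled video frames and log confident detections with timestamps, so a
# human analyst can jump straight to the frames worth reviewing.
import cv2                                              # OpenCV, for decoding video
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()   # generic COCO-trained detector

def scan_footage(path, score_threshold=0.8, frame_stride=30):
    """Yield (seconds, label_id, score) for every confident detection."""
    video = cv2.VideoCapture(path)
    fps = video.get(cv2.CAP_PROP_FPS) or 30.0
    frame_index = 0
    while True:
        ok, frame = video.read()
        if not ok:
            break
        if frame_index % frame_stride == 0:             # sample roughly one frame per second
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            with torch.no_grad():
                detections = model([to_tensor(rgb)])[0]
            for label, score in zip(detections["labels"], detections["scores"]):
                if score >= score_threshold:
                    yield frame_index / fps, int(label), float(score)
        frame_index += 1
    video.release()

# Hypothetical usage: print a time-stamped log an analyst could skim.
for seconds, label_id, score in scan_footage("patrol_footage.mp4"):
    print(f"{seconds:8.1f}s  class={label_id:3d}  confidence={score:.2f}")
```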

“We believe that Google should not be in the business of war,” said the letter, addressed to Sundar Pichai, the company’s chief executive, and first reported by the New York Times. The letter also demanded that the project be cancelled and that the company “draft, publicize, and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

Hanwha Group photo

Armored vehicles produced by South Korea’s Hanwha Group

A second letter emerged Wednesday, this one aimed at a top South Korean research university. The missive, signed by more than 50 AI researchers and scientists from 30 different countries, lambasted South Korea’s KAIST for opening a lab in conjunction with Hanwha Systems, the country’s leading arms manufacturer.

The lab, dubbed the “Research Center for the Convergence of National Defense and Artificial Intelligence,” is planned as a forum for academia to partner with the South Korean military to explore how AI can bolster national security. The university’s website said that it’s looking to develop “AI-based command and decision systems, composite navigation algorithms for mega-scale unmanned undersea vehicles, AI-based smart aircraft training systems, and AI-based smart object tracking and recognition technology.”

Paul Scharre

The university’s leaders have said they have no intention of developing autonomous weapons that lack human control, but the protesters say they will not visit or work with the world-renowned institution until it pledges not to build such weapons.

As for the US effort, a Pentagon spokesperson told Breaking Defense that Maven “is fully governed by, and complies with” U.S. law and the laws of armed conflict and is “designed to ensure human involvement to the maximum extent possible in the employment of weapon systems.”

“I think it’s good that we’re having a conversation about this,” said Paul Scharre, director of the Technology and National Security Program at the Center for a New American Security. As far as Maven goes, he said, “I think this application is benign,” since it mostly uses open-source technologies, but he understands that engineers are concerned about the “slippery slope” of the greater military use of AI.

“Researchers have for decades been able to do their AI work and its applications have been very theoretical,” Scharre said, “but some of the advances we’ve seen in machine learning have been making this stuff very real, including for military applications.” No wonder, then, that legal scholars and software programmers alike have started wrestling in earnest with the implications of armed AI.