UPDATED with Russell rebuttal

WASHINGTON: We now live in a world where a Campaign to Stop Killer Robots is a deadly serious thing, officially endorsed by 26 national governments (28, if you count Palestine and the Vatican). China is on that list, albeit with a huge asterisk: It wants to ban only the use of “lethal autonomous weapons systems,” not research, development and production. (After all, Beijing has an enormously ambitious plan to dominate AI by 2030.)

[Click here to read the entire series: The ‘Killer Robots’ Debate]

The US is not on the list of countries that want to ban killer robots. Nor is Russia. Nor is the UK, nor are any of our leading allies. Should we be?

Robert Work

There’s a strong argument that the US has already sworn off “killer robots.” Civilian officials and military leaders alike have said for years the US will always keep a “human in the loop” for any use of lethal force. There is a waiver provision in Pentagon policy, but no one’s ever used it.

Plus, arms control is a hard sell in Washington these days. The US has finally abandoned the landmark INF Treaty after years of futilely complaining about well-documented Russian violations.

“The Department of Defense has no intention … to go after what are now called ‘lethal autonomous weapons systems,’” says Bob Work, who as deputy secretary of defense under Obama did more than any other person to push the Pentagon to embrace AI. “What commander would want that?”

But there’s a contradiction here, argues Stuart Russell, a Berkeley AI scientist and activist. “At present, the US is opposed to discussions on a Lethal Autonomous Weapons System treaty, while officially claiming not to be developing LAWS,” Russell told me. “This amounts to military suicide,” he argued: If you won’t build a new weapon yourself, why leave potential adversaries free to pursue it?

The cynical answer, of course, is that a ban would be so difficult to enforce that it would only create a false sense of security. But might it have real benefits?

XQ-58A Valkyrie “loyal wingman” drone, built by Kratos for Air Force Research Lab

Benefits of a Ban?

Russell emphasizes he doesn’t oppose all military uses of artificial intelligence, only AI that can kill without a human authorizing each attack. And if the US did agree to a ban on lethal artificial intelligence, it would make it much easier for computer scientists and engineers to work with the Pentagon on those other kinds of military AI, Russell argued, just as the Biological Weapons Convention let biologists develop defenses against germ warfare without fear their work would be perverted for offense.

(In the US, that is. The Soviet Union massively violated the BWC, which had no inspection mechanism: just ask around about Biopreparat and Ken Alibek.)

Stuart Russell (Berkeley photo)

“Having a ban in place would make it much easier to develop ATLAS-like technologies that can protect soldiers’ lives,” Russell said, citing an Army program to use AI to assist in aiming and targeting (but not firing) weapons. “Quite possibly there would have been much less pushback against Project Maven at Google, because researchers would have some assurance that the technology would not be used in autonomous weapons. There would be a big steel gate closing off the slippery slope.”

Certainly, US civilian and military leaders insist, over and over, they want a “human in the loop” at all times for reasons both ethical and tactical.

“The last thing I want is you to go away from this thinking this is all about technology,” Work said during a speech on his AI push back in 2015. “The number one advantage we have is the people in uniform, in our civilian work force, in our defense industrial base, and the contractors who support us.” If Russia or China decide to take their people out of the loop and rely on automation alone, Work argued, then we can beat them with our combination of humans plus machines — creativity and calculation, intuition and precision.

Then there’s the ethical aspect — which also ties to the military’s deep cultural need to control the chaos of battle as much as possible. “The commander who uses a lethal autonomous weapons system that chooses its own target [is] delegating his culpability for a law-of-war violation to a free-willed machine,” Work said at CNAS. “I’ve never talked to any commander in the West who’s said, ‘hey, that’s a real good idea.’”

So if the US military doesn’t really want lethal AI, should we ban it? Well, one former defense official told me, that depends on what kind of AI you mean. What Work and other national security insiders are objecting to is an AI that can choose and revise its own objectives — in rough terms, what scientists mean when they talk about artificial general intelligence. US military leaders are not objecting to so-called narrow AI that can only perform certain pre-defined tasks: shoot down incoming missiles (which the Navy’s Aegis, not even an AI, can do already), for example, or hear a rifle fire, instantly calculate a return trajectory, and kill the sniper before he gets off a second shot.

A drone’s-eye view of the target in the anti-lethal AI video “Slaughterbots” (screencap)

Even a swarm of drones that hunt and kill specific individuals — one of Russell’s nightmare scenarios — would count as narrow AI, at least as long as human beings drew up the target list. (An AI that decides for itself who needs to die, perhaps based on a big-data analysis of terror suspects, might still not qualify as general AI, but it would give most generals a heart attack). And a drone that can recognize a “high value target” on sight, then kill him with three grams of explosives to the face, may be less likely to kill civilians by accident than human beings watching a long-distance video feed, then launching a Hellfire with a 20-pound warhead.

UPDATE “This strategy of saying it’s not really autonomous unless it’s conscious/superhuman/mysterious is just a deflection tactic,” Russell told me after the original version of this article appeared. A “narrow” AI, he argued, is quite capable of carrying out genocidal orders — like killing every light- or dark-skinned adult male — or making catastrophic errors — like mistaking children for adults — and repeating them with mindless efficiency long after a human would have started asking questions.

“I don’t think we should declare we will never use a lethal artificial narrow intelligence weapon, designed for specific battlefield missions,” the former defense official said. As for general AI, “we should debate whether or not we should pursue such a weapon, and if the answer is no, we should state it as policy.”

What about an international treaty banning lethal AI, which would have the force of law? “Such a treaty would be very difficult to monitor and enforce,” the official said. “That doesn’t mean an international treaty should be ruled out.”

Inspectors from the Organisation for the Prohibition of Chemical Weapons (OPCW), a potential model for the monitoring of military AI (OPCW photo)

The Enforcement Problem

“Autonomous weapons have all of the features that make arms control hard,” said Paul Scharre, a former Army Ranger, now at CNAS, who worked on the current Pentagon policy. International inspectors can’t tell by looking at an unmanned tank, plane, or warship whether it’s programmed to ask a human for permission before opening fire. Even if they get to see the actual code — a security breach few countries would allow — “there’s nothing to stop you from upgrading the software as soon as the inspectors leave,” he told me.
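
To see why, consider a deliberately simplified sketch — my own illustration, with every name in it hypothetical, not code from any real weapon system. The difference between a machine that waits for a human and one that decides on its own can come down to a single configuration value, and nothing visible on the hardware reveals which way that value is set:

```python
from dataclasses import dataclass

# One configuration value separates "human in the loop" from "fully autonomous."
REQUIRE_HUMAN_AUTHORIZATION = True

@dataclass
class Track:
    """A detected object, as reported by a (hypothetical) onboard classifier."""
    label: str
    confidence: float

def operator_approves(track: Track) -> bool:
    """Stand-in for a human operator's decision at a console."""
    answer = input(f"Engage {track.label} (confidence {track.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def may_engage(track: Track) -> bool:
    """Engagement decision: supervised or autonomous, depending on one flag."""
    if REQUIRE_HUMAN_AUTHORIZATION:
        return operator_approves(track)   # human in the loop
    return track.confidence > 0.9         # the machine decides alone
```

Flip that one flag in a post-inspection software update and the “supervised” system becomes an autonomous one — which is exactly Scharre’s point about verification.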

Paul Scharre

“Having said that,” Scharre continued, “I think that the kind of arms control that Stuart Russell is advocating for is actually more feasible.” If someone’s building vast swarms of lethal mini-drones, you don’t have to see the code to know they have to be fully autonomous: There’s no practical way, Scharre told me, for humans to review and approve “a million targets.”
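
Some rough arithmetic shows why — the five-seconds-per-decision figure below is my own assumption, not Scharre’s:

```python
# Back-of-the-envelope only: how long would one operator need to
# personally approve a million targets?
targets = 1_000_000
seconds_per_decision = 5          # assumed: a brisk five seconds per call
hours = targets * seconds_per_decision / 3600
print(f"{hours:,.0f} hours, or about {hours / 24:.0f} days of nonstop review")
# -> 1,389 hours, or about 58 days of nonstop review
```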

Conversely, such mini-drones are only truly threatening in vast numbers. “A country or an individual… might be able to build a few hundred of these,” Scharre said, “but if you’re going to build millions of them, there’s no way to hide that.”

So how would you find them? The best model is probably the Chemical Weapons Convention, which, unlike many other treaties — the Biological Weapons Convention, the landmine ban, and so on — has a robust enforcement mechanism.

The scope of the problem is similar. Lethal chemicals like chlorine and phosgene are widely used in legitimate industry, so you can’t ban them outright any more than you could ban mini-drones; they’re relatively easy to turn into weapons, again like drones; and yet only rogue states like Syria and Iraq have used them since the end of World War I. Much of the reason militaries abandoned poison gas is that a weapon that blows with the wind is hard to control — yet another similarity with AI, since even “narrow” machine-learning algorithms modify themselves in ways beyond human understanding. But there is also a robust monitoring regime, run by the Organisation for the Prohibition of Chemical Weapons (OPCW), which has about 250 inspectors who can rapidly respond to reported violations.

Such “challenge inspections” are a crucial tool, said Irakli Beridze, a Georgian-born veteran of both the OPCW and UNICRI’s chemical, biological, radiological, and nuclear program — with service in Afghanistan, Iraq, Libya, and Syria — who now runs the Centre for AI & Robotics at UNICRI, the United Nations Interregional Crime and Justice Research Institute. (Beridze emphasized he was expressing only his personal opinion as an expert, not speaking as a UN official.)

Mini-drone production would be easier to hide than chemical plants — for one thing, it doesn’t stink like a lot of toxic chemicals — but investigative techniques have advanced since the CWC entered into force in 1997. It might even be possible, Beridze said, to set an AI to catch an AI: use artificial intelligence to crunch big data — social media or parts orders, for example — and correlate subtle clues no human inspector could catch.
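
As a purely notional illustration of what “an AI to catch an AI” could mean — invented data and toy statistics, not anything drawn from OPCW or UNICRI methods — a monitoring system might flag buyers whose orders of some small drone component suddenly spike far above their own history:

```python
import statistics

# Hypothetical order history: units of a small flight controller bought per month.
history = {
    "hobby_shop_a":    [110, 95, 130, 120, 105, 125],
    "university_lab":  [300, 280, 310, 295, 305, 290],
    "shell_company_x": [60, 75, 50, 65, 70, 48_000],   # a sudden, massive spike
}

def spiking_buyers(history: dict[str, list[int]], threshold: float = 4.0) -> list[str]:
    """Flag buyers whose latest monthly order exceeds their own historical
    mean by more than `threshold` standard deviations."""
    flagged = []
    for buyer, orders in history.items():
        *past, latest = orders
        mean, stdev = statistics.mean(past), statistics.stdev(past)
        if stdev > 0 and (latest - mean) / stdev > threshold:
            flagged.append(buyer)
    return flagged

print(spiking_buyers(history))   # -> ['shell_company_x']
```

Real treaty monitoring would have to fuse far messier signals — customs records, shipping manifests, social media — but the principle is the same: machines can sift volumes of data no human inspection team could.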

Irakli Beridze (UN Photo)

Robust inspections, however, are only one part of the solution, he told me. Countries need not only to sign the treaty but also to use their own intelligence agencies and domestic law enforcement to watch for violations. And, after initial reluctance in the private sector, “buy-in and participation of the chemical industry… was absolutely essential,” he said. “Otherwise this treaty would not work.”

Once compliance became a norm in the chemical industry, in large part because of the moral stigma that attached to chemical weapons, it became much harder to produce poison gas in militarily significant amounts. Given widespread anxiety in the tech community about lethal AI, it should be possible to reach a similar consensus among drone manufacturers — eventually. Getting private industry, law enforcement, and national governments on board, even simply making them aware of the problem, would take years.

“We don’t have too much time,” Beridze warned. In a few years, “we will have a widespread technology where criminals can use small drones [for] mass terrorist attacks, assassinations, contract killing, you name it.”

 

[Click here to read the entire series: The ‘Killer Robots’ Debate]