WASHINGTON: 70-ton robotic battle tanks? Scary. Three grams of explosive on a mini-drone that knows your face? Also scary. Thousands of such drones? Millions? That’s potentially a strategic game-changer in a way that automating conventional military hardware is not.

[Click here to read the entire series: The ‘Killer Robots’ Debate]

“I’m not too worried about vast autonomous swarms of battle tanks,” Berkeley AI scientist and activist Stuart Russell told me.

M1 Abrams tank

You’re not? I’d reached out to Russell because of his criticism of the US Army’s ATLAS project to put Artificial Intelligence in armored vehicles, a system intended to assist human gunners that he argued could all too easily replace them altogether. Quartz.com headlined its story on ATLAS “The US Army wants to turn tanks into AI-powered killing machines.” Okay, so the US Army actually doesn’t want that at all — replacing loyal, well-trained soldiers with unproven technology justifiably gives generals the heebie-jeebies — but just the possibility of robot tanks got a lot of people pretty worried.

Russell, however, has bigger things to worry about — or rather, much, much smaller things.

Soldier with handheld quadcopter

“I think of autonomous tanks as mainly a weapon for war between major powers,” he said. Taking the humans out of an armored vehicle, fighter jet, or warship could make it more effective in combat and, because you no longer need space and life support for a human crew, definitely smaller and cheaper. But automating conventional war machines doesn’t make them small and cheap enough that governments can stockpile vast swarms of them in secret and smuggle them into an enemy capital, or that terrorists can build them in garages with 3D printers.

So what Russell really worries about is not robotic tanks — though he’d definitely prefer a world without them — but what happens when the technology is developed and the precedent is set.

Stuart Russell (Berkeley photo)

“Given the cost of a new M1A2 around $9 million…there are far cheaper ways to flatten a city and/or kill all of its inhabitants,” Russell told me. “The problem with full autonomy is that it creates cheap, scalable weapons of mass destruction.”

It’s already possible to build assassin drones by combining off-the-shelf quadcopters, small amounts of homebrewed explosive, and the kind of facial-recognition technology Facebook uses to tag other people’s bad pictures of you.

“My UAV colleagues tell me they could build a weapon that could go into a building, find an individual, and kill them as a class project,” Russell said. “Skydio plus self-driving cars plus AlphaStar more or less covers it.” (Skydio makes a self-flying camera drone you can buy on Amazon; AlphaStar is the DeepMind AI that beats top human players at the complex strategy game StarCraft II.) In fact, he said, Switzerland’s defense ministry, the DDPS, “made some to see if they would work — and they do.”

Not only would they work, they’ve already been tried. ISIS has already used mini-drones as “flying IEDs,” and someone attempted to assassinate Venezuelan President Nicolás Maduro with a pair of exploding drones.

A quadcopter that slipped through security to land on the White House lawn

Small Drones, Big Kills

Now what happens when you scale this up? Russell and fellow activists actually produced a video, Slaughterbots, in which swarms of mini-drones attack, among other groups, every member of Congress from a particular party. But that’s still thinking small.

Remember, once you’ve written the software, you can make infinite copies; lone cranks can make explosives; and mini-drones are getting cheaper by the day. Remember also that the Chinese government has personal information on some 22.1 million federal employees, contractors, and their family members from the 2015 Office of Personnel Management breach. Now imagine one out of every thousand shipping containers imported from China is actually full of mini-drones programmed to go to those addresses and explode in the face of the first person to leave the house. Imagine they do this the day before China invades Taiwan. How effectively would the US government react?

A quadcopter drone destroyed by Rafael’s “Drone Dome” laser system (video screencap)

A rogue state or terrorist group could go further. How about programming your mini-drones to kill everyone who looks white, or black, or Asian? (Google’s image-recognition algorithm once labeled photos of African-Americans as “gorillas,” not humans, so racist AI is a mature technology.) It would be genocide by swarm.

Such a tactic might work only once, much like hijacking airliners with box cutters on 9/11. “Small drones are vulnerable to jamming, to high-powered microwaves, to other drones that might intercept them, to nets,” said Paul Scharre, an Army Ranger turned think tank analyst. “Bullets work pretty well… I have a buddy who shot a drone out of the sky back in Iraq in 2005.” (Unfortunately, the drone was American.) At least some object-recognition algorithms can be tricked by carefully applied reflective tape.

“People are working on countermeasures today,” Scharre told me, “and the bigger the threat becomes, the more people have an incentive to invest in countermeasures.”

Paul Scharre

But how do you stop tiny drones from becoming a big threat in the first place? While technology to build a “working prototype” already exists, Russell told me, the barrier is mass production.

No national spy agency or international monitoring regime can find and stop everyone trying to make small numbers of drones. But, Russell argues fervently, a treaty banning “lethal autonomous weapons systems” would prevent countries and companies from openly producing swarms of them, and a robust inspection mechanism — perhaps modeled on the Organisation for the Prohibition of Chemical Weapons — could detect covert attempts at mass production.

Without a ban, Russell said, legal mass production could make lethal swarms as easy to obtain as, say, assault rifles — except, of course, one person can’t aim and fire thousands of rifles at once. Thousands of drones? Sure.

So don’t fear robots who rebel against their human masters. Fear robots in the hands of the wrong human.

Would a ban on lethal AI actually work? Would the United States actually want it to work? That’s the question we’ll address in the fourth and final story in this series, out Monday.

[Click here to read the entire series: The ‘Killer Robots’ Debate]