Air Force photo

Predator drone operators.

CENTER FOR A NEW AMERICAN SECURITY: How do you stop a Terminator scenario before it starts? Real US robots won’t take over like the fictional SkyNet, Pentagon officials promise, because a human being will always be “in the loop,” possessing the final say on whether or not to use lethal force.

But by the time the decision comes before that human operator, it’s probably too late, warns Richard Danzig. In a new report, the respected ex-Navy Secretary argues that we need to design in safeguards from the start.

“In the design of the machine, we ought to recognize we can’t rely on the human as much as we’d like to think,” Danzig said. The machine has to be built from the start with an eye on what could go wrong.

Wikimedia Commons

Richard Danzig

Danzig’s model is the extensive protections put in place for nuclear weapons, from physical controls to arms control. But nukes were actually easy to corral compared to emerging technologies like Artificial Intelligence and genetic editing, he told me in an interview. Nukes can’t think for themselves, make copies of themselves or change their own controlling code.

What’s more, nuclear weapons are purely that, weapons: You don’t use a nuke to decide whether you should use your nukes. Computers, by contrast, have become essential to how we get and use information.

A modern battle is too far-flung, fast-moving and complex for a human commander to take in with his own eyes, as Napoleon did overlooking the field at Waterloo. With a few exceptions like light infantry, today’s warfighters get most of their information from screens. Pilots don’t see enemy fighters, captains don’t see enemy ships, and missile defenders don’t see enemy missiles. Instead, radar, sonar, infrared sensors, satellites and drones all feed into computers that collate, analyze, and “fuse” the data for them. You can claim the “human in the loop” still has the final say, but if the machine is providing all the data for that decision, the human isn’t really an independent check on its operations.

The SkyNet scenario — where a military artificial intelligence turns hostile — is just one extreme case. Far more likely, Danzig argues, is simple error: human error, machine error, and each compounding the other.

“Error is as important as malevolence,” Danzig told me in an interview. “I probably wouldn’t use the word ‘stupidity,’ (because) the people who make these mistakes are frequently quite smart, (but) it’s so complex and the technologies are so opaque that there’s a limit to our understanding.”

Drawing on Danzig’s work and other sources, here are some sample scenarios, from simplest to most complex:

The Castle Bravo nuclear test at Bikini Atoll: The largest explosion ever set off by the US, it exceeded its estimated yield and contaminated over 600 people with fallout.


AI Broken Arrow

The Nuclear Age was rife with accidents and errors, from the accidental dropping of two nuclear bombs on North Carolina (they didn’t detonate) to underestimating the yield of a Bikini Atoll test and exposing over 600 people to dangerous radiation. When you add artificial intelligence to the mix, even if the human operator does everything right, the AI can still introduce errors of its own.

Machine learning, especially object recognition, is still notorious for confusing things that no human would ever mix up: a baseball bat and a toothbrush, for example, or the bright sky and the white side of a truck, as in a fatal Tesla Autopilot crash in 2016. Add deliberate deception to the mix — so-called “data poisoning” — and a hostile actor can teach the machine-learning algorithms things you don’t want, like Microsoft’s Tay chatbot that started spewing racist bile.
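To make the label-corruption idea concrete, here is a minimal, hypothetical sketch in Python using scikit-learn on synthetic data: an attacker who can relabel part of the training set skews the classifier the defender ends up relying on. The dataset, model, class names and poisoning rate are arbitrary illustrative assumptions, not anything drawn from Danzig’s report or a real military system.

```python
# Toy illustration of "data poisoning": an adversary who flips some training
# labels skews the decision rule the model learns. Purely illustrative; the
# dataset, classifier, and 40% poisoning rate are arbitrary assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

def false_alarm_rate(model, X_test, y_test):
    """Fraction of true class-0 ('benign') samples the model calls class 1 ('hostile')."""
    benign = X_test[y_test == 0]
    return float(np.mean(model.predict(benign) == 1))

# Baseline: train on clean labels.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Poisoned: relabel 40% of the 'benign' training samples as 'hostile'.
rng = np.random.default_rng(0)
benign_idx = np.where(y_tr == 0)[0]
flipped = rng.choice(benign_idx, size=int(0.4 * len(benign_idx)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flipped] = 1
poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print(f"false-alarm rate, clean training data:    {false_alarm_rate(clean, X_te, y_te):.2f}")
print(f"false-alarm rate, poisoned training data: {false_alarm_rate(poisoned, X_te, y_te):.2f}")
```

The point is not the specific numbers but the mechanism: the human in the loop downstream sees only the model’s outputs, never the corrupted training data that produced them.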

What’s worse, Danzig argued, while civilian AI can do extensive testing in real-world circumstances — Tesla Autopilot, Google’s self-driving cars and self-driving Ubers have all driven on public roads — military testing can’t really replicate what the enemy will do. (Surprising us is the enemy’s job, after all.) Like human soldiers, AI will encounter at least some enemy tactics and technologies for the first time in battle, with unpredictable results.

So imagine an AI that systematically misclassifies civilian construction workers carrying pipes as hostile insurgents with rocket launchers, or vice versa. In the press of battle, does the human in the loop have time to double-check? Imagine a cybersecurity AI that mistakenly attributes a virus attack on the US electrical grid to Russia when it was actually (say) North Korea: The human in the loop still gets to decide whether or not to retaliate, but it’s against the wrong target. Or imagine a coding error that causes AI to mix up inert training ammunition with live rounds (humans have made similar mistakes with nukes): The human in the loop wouldn’t know until they’d already given the order to fire.

Roberto Maltchik / TV Brasil

Wreckage of Air France 447

Dropping The Ball

Humans and artificial intelligences can both make mistakes, but a new breed of error is born when they have to work together. The worst such incident so far is the crash of Air France 447, which killed all 228 people on board. A minor sensor glitch caused the autopilot to switch off and return the plane unexpectedly to full manual control. The junior pilot who’d been left on the controls apparently panicked and pulled up until the plane stalled — something that wouldn’t have been possible if normal computerized safety overrides had stayed in place.

There are similar, though less horrific, cases in the US military. During the 2003 invasion of Iraq, a Patriot missile shot down a British jet, killing both crewmen, because the system’s computer misidentified the radar contact as hostile and the human in the loop didn’t override the automated decision to fire in time. So the US Army ordered the Patriot batteries taken off automatic mode — until a misunderstood order led one unit to unintentionally re-enable it. That time a US Navy pilot died.

(Click here to read a fuller discussion of these and similar cases in our series on “Artificial Stupidity.”)

Poor design choices can produce an AI that’s confusing to its human operators; poor personnel systems can produce human operators who lack the training to understand the system. Both were clearly at work in the Air France case, where the autopilot shouldn’t have switched itself off in the first place and the pilot should’ve known what to do when it did. Both problems could easily apply in the military, whose equipment is often notoriously user-unfriendly in design and whose operators are often either newly trained 18-year-olds or officers who change jobs too rapidly to become experts. If the “human in the loop” doesn’t really know what they’re doing, how is that an effective control?

Trench warfare in World War I.

Guns of August

So far we’ve talked about individual failures. But what happens when multiple failures compound each other? For a historical example of how a complex technological system, imperfectly understood by its operators, can cascade out of control, Danzig points to the start of World War I — the infamous “Guns of August.”

The Great Powers of 1914 didn’t have Artificial Intelligence, of course, but they did have a complicated system of new technologies never before used together on such a scale: railroads, telegraphs, the bureaucracy of mass mobilization, quick-firing artillery and machine guns. The potential to deploy huge armies in a hurry, before the other side was ready, put pressure on decisionmakers — the humans in the loop — to strike first lest their adversaries do so. In Germany’s case in particular, the mobilization plans were so complex, so interwoven, and so impossible for humans to modify on the fly that, when the Kaiser asked if he could go to war against Russia alone, without involving France, his generals flatly told him no. It would throw the army into chaos, ensuring defeat.

Warfare has only grown more complex in the century since World War I. The military already relies on software and networks to manage global logistics and operations. Pentagon leaders talk about using artificially intelligent systems to sort through the masses of data and help humans make decisions, as we’ve reported in our series the War Algorithm. This opens up some hopeful possibilities. Maybe an AI could modify complex war plans on the fly in a way human minds could not, creating the kind of options that the Kaiser longed for. It also creates some fearful prospects. What if the plan is so complex that humans don’t really understand it?

Modern technology can also create the same pressure for a first strike that the technology of 1914 did. Cyber weapons are really just software that must be exquisitely tailored to specific targets, as Stuxnet was to the Iranian nuclear program. If the enemy starts patching the vulnerabilities your software would exploit, you face a “use it or lose it” situation. Computer networks, satellites in orbit and other modern infrastructures are also relatively fragile, giving a strong advantage to whichever side strikes first.

President Obama and Chinese President Xi Jinping

The Start of a Solution

There are technological solutions to parts of the problem, Danzig argued. To prevent artificial intelligence from proliferating out of control, you can program it with built-in off switches, like Stuxnet, which was set to shut down after a certain time. DARPA’s Safe Genes program is trying to design similar off switches for artificially altered DNA sequences, for example, by inserting genes that suppress the rest if triggered by a certain enzyme. We should also simply make a greater effort to design and test our systems against the possibility of error, Danzig urged, with extensive use of Red Teams to give outside perspective and avoid groupthink.
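As a purely illustrative sketch of what a built-in off switch can look like at the software level, the Python below hard-codes an expiration date and refuses to keep operating past it. The date, function names and shutdown behavior are assumptions for illustration, not Stuxnet’s actual mechanism or the code of any fielded system.

```python
# Minimal sketch of a time-based "off switch": the program checks a hard-coded
# kill date and disables itself once that date has passed. Illustrative only;
# the date and behavior here are arbitrary assumptions.
from datetime import datetime, timezone
from typing import Optional

KILL_DATE = datetime(2025, 1, 1, tzinfo=timezone.utc)  # hypothetical expiration date

def is_expired(now: Optional[datetime] = None) -> bool:
    """Return True once the built-in kill date has passed."""
    now = now or datetime.now(timezone.utc)
    return now >= KILL_DATE

def main() -> None:
    if is_expired():
        # Fail safe: stop autonomous operation rather than keep running.
        print("Kill date reached: shutting down.")
        return
    print("Within authorized window: operating normally.")

if __name__ == "__main__":
    main()
```

A real safeguard would of course be hardened against tampering and tied to more than a calendar check; the point is only that shutdown conditions can be designed in from the start, as Danzig urges.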

But part of the solution must be diplomatic, Danzig said. Even if the US is the first to develop a particular technology, we can’t assume we and our allies will keep a monopoly on it. The history of everything from the atomic bomb to satellites to stealth aircraft makes that clear. We need to plan for tech to get stolen or copied.

At IARPA, the Intelligence Community’s equivalent of DARPA, director Jason Matheny now has a checklist of questions to think through before launching new programs, reproduced in Danzig’s report:

  1. What is your estimate about how long it would take a major nation competitor to weaponize this technology after they learn about it? What is your estimate for a non-state terrorist group with resources like those of al Qaeda in the first decade of this century?
  2. If the technology is leaked, stolen, or copied, would we regret having developed it? What, if any, first-mover advantage is likely to endure after a competitor follows?
  3. How could the program be misinterpreted by foreign intelligence? Do you have any suggestions for reducing that risk?
  4. Can we develop defensive capabilities before/alongside offensive ones?
  5. Can the technology be made less prone to theft, replication, and mass production? What design features could create barriers to entry?
  6. What red-team activities could help answer these questions? Whose red-team opinion would you particularly respect?

But what happens once the genie is out of the bottle? We need to reach out not only to friendly nations, Danzig argues, but also to adversaries like China and Russia to come up with some kind of common approach, as we did with nuclear weapons during the Cold War. Yes, nuclear arms control required laborious negotiation and extensive verification for imperfect results, but on balance it made the world safer, he argues. We have nine nuclear powers today instead of the dozens many experts expected in the 1960s; we have hotlines, inspections and other ways to share information. The same model can apply to new technologies like AI.

“We need to start to plan with our potential competitors or opponents in these areas so we together understand, ‘here are the risks of a mistake,'” Danzig said. “Let’s recognize those and figure out ways we can … diminish that and to deal with them when they arise.”