Terminator army (credit: Warner Brothers)

IN FLIGHT TO ANDREWS AFB: Defense Secretary Ashton Carter is pushing hard for artificial intelligence — but the US military will “never” unleash truly autonomous killing machines, he pledged today.

“In many cases, and certainly whenever it comes to the application of force, there will never be true autonomy, because there’ll be human beings (in the loop),” Carter told Sydney and fellow reporter John Harper as they flew home to Washington.

Defense Secretary Ashton Carter at the TechCrunch conference in San Francisco. (TechCrunch video still)

Carter’s trip to Austin and San Francisco had been all about outreach to the information technology community. In particular, he said, “we’re making big investments” in autonomy, which is the centerpiece of Carter’s Third Offset Strategy to retain America’s high-tech edge. But, he emphasized, technology must operate within legal and ethical limits.

This is the issue that the Vice Chairman of the Joint Chiefs of Staff, Gen. Paul Selva, calls the Terminator Conundrum. The prestigious Defense Science Board, which recently released its summer study on autonomy, called for immediate action to develop autonomous capabilities even as it stressed the need to build verifiable trust in such weapons.

The DSB did not state whether weapons should be allowed to kill humans without a human in the loop. But the study’s authors write that, “when something goes wrong, as it will sooner or later, autonomous systems must allow other machine or human teammates to intervene, correct, or terminate actions in a timely and appropriate manner, ensuring directability. Finally, the machine must be auditable—in other words, be able to preserve and communicate an immutable, comprehensible record of the reasoning behind its decisions and actions after the fact.”
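To make those two requirements concrete, here is a minimal sketch of what “directability” and “auditability” might look like in software. Everything in it, from the class names to the hash-chained log, is an illustrative assumption, not a design the board prescribes:

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only, hash-chained log: altering any past entry breaks
    the hashes of every entry after it, approximating 'immutability'."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64

    def record(self, actor: str, action: str, rationale: str) -> None:
        entry = {
            "time": time.time(),
            "actor": actor,
            "action": action,
            "rationale": rationale,  # the 'comprehensible record of reasoning'
            "prev": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

class DirectableSystem:
    """An autonomous loop that a human or machine teammate can halt."""

    def __init__(self, log: AuditLog):
        self.log = log
        self.halted = False

    def intervene(self, operator: str, reason: str) -> None:
        # 'Directability': a teammate terminates the action in time.
        self.halted = True
        self.log.record(operator, "TERMINATE", reason)

    def step(self, observation: str) -> None:
        if self.halted:
            return
        action = f"respond_to({observation})"  # stand-in for real decision logic
        self.log.record("machine", action, f"triggered by {observation!r}")
```

The hash chain is one simple way to approximate the “immutable record” the study demands: a tampered entry no longer matches the hashes recorded after it.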

Carter came down on the side of human intervention from the start. “Whatever the mix (of manned and unmanned systems), there’s always going to be human judgment and discretion,” Carter said. “That’s both necessary and appropriate.”

But isn’t that unilateral disarmament, I asked, when countries like Russia and China are at least talking about fully autonomous weapons? As Army War College professor Andrew Hill and retired colonel Joseph Brecher argued in a recent essay, no one may particularly want a world of independent killer robots, but if making your robots wait for slow-moving human brains to order them to fire carries a big tactical disadvantage, then the logic of the prisoner’s dilemma forces both sides to go autonomous. No less a figure than the Pentagon’s top weapons buyer, Frank Kendall, has publicly worried that, by insisting on human control, the US will suffer a self-inflicted disadvantage against less scrupulous foes.
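That prisoner’s dilemma logic is easy to see in miniature. Below is a toy payoff table with numbers invented purely for illustration (Hill and Brecher offer no such figures): whatever the adversary does, going autonomous pays better, even though both sides end up worse off than if both had kept humans in the loop.

```python
# Hypothetical payoffs for the autonomy arms race described above:
# each side either keeps a human in the loop or goes fully autonomous.
PAYOFFS = {
    # (our choice, their choice): (our payoff, their payoff)
    ("human", "human"):           (3, 3),  # both restrained: best joint outcome
    ("human", "autonomous"):      (0, 4),  # we wait on slow human approval and lose
    ("autonomous", "human"):      (4, 0),  # they wait on approval; we win
    ("autonomous", "autonomous"): (1, 1),  # both automate: worst joint outcome
}

def best_response(their_choice: str) -> str:
    """Pick our payoff-maximizing choice given the adversary's choice."""
    return max(("human", "autonomous"),
               key=lambda ours: PAYOFFS[(ours, their_choice)][0])

# Going autonomous dominates no matter what the other side does...
assert best_response("human") == "autonomous"
assert best_response("autonomous") == "autonomous"
# ...even though (autonomous, autonomous) leaves both sides worse off
# than (human, human): the classic prisoner's dilemma.
```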

Carter and his deputy secretary, offset architect Bob Work, advocate “human-machine teaming,” a symbiotic approach in which humans provide insight, objectives, and guidance to the computers that carry out their orders. It’s essentially analogous to how commanders lead their human subordinates today, Carter argued. The subordinate, be it man or machine, acts on its own knowledge but within the tactical, legal, and ethical bounds set by its superiors.

“Whether it’s a subordinate command, a manned aircraft, or an autonomous system, when you send it to use force, you want it to use the information on site to have the best effect,” Carter said, “(but) you set things up (in advance), give orders and instructions such that everything that is done, is done in a way that is compatible with the laws of armed conflict… as well as American military doctrine.”
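In software terms, that arrangement amounts to a machine that chooses freely inside an envelope its superiors fixed in advance. Here is a hypothetical sketch; the field names and thresholds are invented for illustration, not drawn from any actual doctrine:

```python
from dataclasses import dataclass

@dataclass
class Orders:
    """The envelope set up in advance by human superiors (hypothetical fields)."""
    allowed_target_types: frozenset
    max_collateral_estimate: float
    area_of_operations: str

@dataclass
class ProposedAction:
    """What the machine wants to do, based on information on site."""
    target_type: str
    collateral_estimate: float
    location: str

def within_bounds(action: ProposedAction, orders: Orders) -> bool:
    """The subordinate acts on its own knowledge, but only inside the
    tactical, legal, and ethical bounds its superiors defined up front."""
    return (action.target_type in orders.allowed_target_types
            and action.collateral_estimate <= orders.max_collateral_estimate
            and action.location == orders.area_of_operations)

orders = Orders(frozenset({"radar", "launcher"}), 0.1, "grid-17")
print(within_bounds(ProposedAction("radar", 0.05, "grid-17"), orders))    # True
print(within_bounds(ProposedAction("vehicle", 0.05, "grid-17"), orders))  # False
```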

DARPA’s Sea Hunter (ACTUV) unmanned ship

Many important military missions don’t involve lethal force, Carter added, and those are the fields where we’ll see autonomous decision-making first. Missile defense often comes up in this context, since allocating different weapons — interceptors, lasers, jammers — to incoming missiles means making split-second technical judgments about threats closing at several times the speed of sound. Today, Carter emphasized cyber and electronic warfare, the manipulation of digital information moving over a network (cyber) and/or through the electromagnetic spectrum (EW). (The two fields overlap in the case of wireless networks.)
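That allocation problem can be caricatured in a few lines of code: assign each incoming threat the best remaining weapon before it arrives. The threat names and kill probabilities below are invented for illustration; real battle-management software is vastly more sophisticated.

```python
# Toy weapon-target assignment: greedily give each threat the weapon
# with the highest remaining kill probability. All values are invented.
THREATS = ["missile_A", "missile_B", "missile_C"]
WEAPONS = {"interceptor": 0.9, "laser": 0.7, "jammer": 0.4}  # P(kill)

def assign(threats, weapons):
    """Greedy one-weapon-per-threat assignment, highest P(kill) first."""
    available = dict(weapons)
    plan = {}
    for threat in threats:
        if not available:
            break  # more threats than weapons: some go unengaged
        best = max(available, key=available.get)
        plan[threat] = best
        del available[best]
    return plan

print(assign(THREATS, WEAPONS))
# {'missile_A': 'interceptor', 'missile_B': 'laser', 'missile_C': 'jammer'}
```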

“People tend to want to think of autonomous systems for the use of lethal force,” Carter said, “but their most likely applications in the near-term and mid-term are for such tasks as scanning networks for vulnerabilities, scanning incoming traffic, and doing the kind of work that a cyber defense analyst needs to do today by hand.” Artificial intelligence could handle the microsecond-by-microsecond spread of a computer virus or the lock-on of an enemy targeting radar better than slower-moving human brains could.
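One way to picture the traffic-scanning task Carter describes is simple statistical anomaly detection: flag traffic that spikes far above a learned baseline. The numbers and the three-sigma threshold below are assumptions for illustration only.

```python
import statistics

# Normal traffic observed earlier, in requests per second (invented data).
BASELINE = [102, 98, 110, 95, 105, 99, 101]

def looks_anomalous(rate: float, baseline: list) -> bool:
    """Flag traffic more than three standard deviations above the mean,
    the kind of check an analyst might otherwise run by hand."""
    mean = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return rate > mean + 3 * sigma

print(looks_anomalous(104, BASELINE))  # False: within normal variation
print(looks_anomalous(450, BASELINE))  # True: a worm or DDoS might look like this
```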

Giving an AI control of cyber defenses or radar jammers doesn’t give it the capability to kill anyone — at least not directly. But in a modern military, protecting networks, both wired and wireless, is still a matter of life and death. While we won’t yet be trusting robots to have their finger on the trigger, we’ll still be trusting them with our troops’ lives.