A soldier from the Army’s offensive cyber brigade during an exercise at Fort Lewis. (Army photo)

CAPITOL HILL: The secretive Strategic Capabilities Office is designed to jumpstart high-tech weapons projects. But today the SCO’s own director warned the Senate against placing too much trust in technology. In wartime, under assault from a savvy enemy, systems start breaking down, William Roper said, and the winner will be the side whose human beings adapt best to the chaos. That Clausewitzian reality requires three principles that will shape the Pentagon’s Offset Strategy: decentralization, autonomy, and trust. (Our words, not Roper’s).

“There’s always a desire, where you can, to do things in a centralized fashion. ‘I want to have all the data flowing to the brain in the center, and then commands will push out to the edge,’” Roper told the Senate subcommittee on emerging threats. But in wartime, he went on, “data’s not going to flow the way we want it to.”

Why? Because the enemy is going to interfere — that’s what he gets paid for, after all — by knocking out our satellites, jamming our transmissions, hacking our network, or feeding us disinformation (a Russian specialty). The architecture of the whole system has to be able to “hop” from a jammed satellite, frequency, or connection to another, Roper said, but even though they hop, their performance will inevitably degrade. As a result, both networks and the organizations that use them need to be built on the assumption that local forces on the frontline — “the edge” — may lose contact with the central HQ at the critical moment.
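One way to picture that kind of hopping is a failover loop that steps down to whatever link survives and accepts the slower rate rather than going silent. The sketch below is ours, not Roper’s; the link names and bandwidth figures are invented for illustration.

```python
# Illustrative sketch only: a sender that "hops" to whatever link survives,
# accepting degraded throughput instead of failing outright. Link names and
# bandwidth figures are hypothetical, not from Roper's testimony.

LINKS = [
    {"name": "satcom_primary", "kbps": 2000, "jammed": True},
    {"name": "satcom_backup",  "kbps": 500,  "jammed": False},
    {"name": "hf_radio",       "kbps": 9.6,  "jammed": False},
]

def pick_link(links):
    """Return the best surviving link, or None if everything is jammed."""
    usable = [link for link in links if not link["jammed"]]
    return max(usable, key=lambda link: link["kbps"]) if usable else None

def send(report_kb, links):
    """Send a report over the best remaining link, however slow."""
    link = pick_link(links)
    if link is None:
        return "no link: edge unit must act on commander's intent alone"
    seconds = report_kb * 8 / link["kbps"]
    return f"sent over {link['name']} in {seconds:.1f}s (degraded but alive)"

print(send(250, LINKS))  # falls back to the slower backup satellite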

So technologies need to empower small units to act independently, not make them dependent on constant sensor feeds and supervision from outside. Commanders need to trust subordinates to act independently, not try to use technology to micro-manage them long-distance. And here’s the good news: American culture fosters initiative better than our authoritarian adversaries do.

“The military that will be able to push the most amount of trust to the edge — assuming the enabling technology is there — is likely to win. That’s an area where we have a significant advantage,” said Roper. “When I go around and talk with our operators…and I contrast that with what I see in the rest of the world, I think we have an advantage in the level of trust.”

Trusting humans isn’t a hard sell. What might be harder for Roper is convincing humans to trust machines. That’s where autonomy comes in.

The modern US military loves its drones. Unmanned vehicles like the Predator, though, are essentially remote-controlled, with a human pilot determining every movement. More advanced drones are still designed for constant human supervision. But human control and supervision rely on the uninterrupted flow of data over long-range networks — the very flow that Roper warned may fail.

The more autonomous the robot, however — that is, the more capable it is of making decisions on its own — the less it needs to send a constant sensor feed to a distant human and get a constant stream of instructions in return. So autonomy requires much less bandwidth and offers much more resilience against the network breaking down.
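To see why, compare the traffic a remote-controlled drone generates with what an autonomous one needs. The numbers below are our own rough illustration, not Pentagon figures: a teleoperated aircraft streams video and commands continuously, while an autonomous one might send only a short status report every minute.

```python
# Back-of-the-envelope comparison with made-up but plausible figures:
# a remote-controlled drone streams video up and commands down the whole
# flight, while an autonomous one sends only occasional status reports.

MISSION_SECONDS = 2 * 60 * 60  # a two-hour sortie

# Teleoperation: constant sensor feed plus a constant command stream.
video_feed_kbps = 2_000      # compressed video uplink
command_stream_kbps = 50     # stick-and-rudder command downlink
teleop_mb = (video_feed_kbps + command_stream_kbps) * MISSION_SECONDS / 8 / 1000

# Autonomy: a brief position/status report once a minute.
report_kb = 2                # a short status message
reports = MISSION_SECONDS // 60
autonomous_mb = report_kb * reports / 1000

print(f"teleoperated: ~{teleop_mb:,.0f} MB over the link")
print(f"autonomous:   ~{autonomous_mb:.2f} MB over the link")
```

On these assumptions the autonomous vehicle moves a few hundred kilobytes where the teleoperated one moves gigabytes, which is the sense in which the “pipe” shrinks.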

“A smaller pipe is easier to protect,” said Stephen Welby, assistant secretary of defense for research and engineering, testifying alongside Roper this afternoon. Recent Pentagon simulations of hypothetical teams of manned and unmanned vehicles found it was possible “to shrink the amount of bandwidth required in very interesting ways,” Welby said. As a result, “[autonomous] systems can operate even on unreliable networks.”

Of course, letting robots operate so independently requires that we trust them. The standard, said Welby, is that “I can have confidence they’re going to have certain behaviors, then check back in with me at some future point.”
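One way to read that standard is as a bounded task contract: the operator approves an objective and limits, the machine operates on its own inside them, and it reports back at an agreed time. The sketch below is our illustration of that idea, not Welby’s words; the field names and limits are invented.

```python
# Toy sketch of the "bounded behavior, then check back in" idea.
# Field names and limits are invented for illustration.

from dataclasses import dataclass

@dataclass
class TaskContract:
    objective: str          # what the operator approved
    area_km: float          # how far the vehicle may roam
    check_in_minutes: int   # when it must report back
    weapons_release: bool   # hard constraint: humans keep this decision

def execute(contract: TaskContract, minutes_elapsed: int) -> str:
    """Act within the contract's bounds and report at the check-in time."""
    if contract.weapons_release:
        return "refused: weapons release stays with a human operator"
    if minutes_elapsed >= contract.check_in_minutes:
        return f"checking in: progress report on '{contract.objective}'"
    return f"operating autonomously within {contract.area_km} km of start point"

contract = TaskContract("scout route Blue-7", area_km=10.0,
                        check_in_minutes=30, weapons_release=False)
print(execute(contract, minutes_elapsed=12))   # still working on its own
print(execute(contract, minutes_elapsed=30))   # time to check back in
```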