On Sept. 11, 2001, the U.S. military possessed just a handful of robot aircraft. Today, the Air Force alone operates more than 50 drone “orbits,” each composed of four Predator or Reaper aircraft plus their ground-based control systems and human operators. Smaller Navy, Marine and Army drones number in the thousands.

Since they need not carry the oxygen, avionics and other equipment a pilot requires, drones can fly longer, and for less money, than most manned planes. And that’s not all. “Unmanned systems are machines that operate under a ‘sense,’ ‘think,’ ‘act’ paradigm,” wrote Peter Singer, a Brookings Institution analyst and author of Wired for War. In other words, drones can, in theory, add their own intelligence and initiative to that of their human masters.
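
To make the paradigm concrete, here is a rough sketch of a sense-think-act loop in Python. Every name in it is invented for illustration; none of this reflects actual military drone software.

```python
# Illustrative only: a bare-bones "sense, think, act" control loop.
# All class and method names here are hypothetical, not real drone code.

class SenseThinkActLoop:
    def __init__(self, sensors, planner, effectors):
        self.sensors = sensors      # cameras, radar, navigation inputs
        self.planner = planner      # decision logic: the "think" step
        self.effectors = effectors  # control surfaces, weapons: the "act" step

    def step(self):
        observations = self.sensors.read()            # sense
        decision = self.planner.decide(observations)  # think
        self.effectors.execute(decision)              # act

    def run(self, mission_active):
        while mission_active():
            self.step()
```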

Unmanned Aerial Systems are arguably the defining weapon of the post-9/11 era of warfare — and they have enjoyed investment commensurate with that status: no less than $25 billion annually, year after year, with no end in sight. The coming decade could see even more profound drone development as technology and acceptance reach critical mass. “Automation is key to increasing effects, while potentially reducing cost, forward footprint and risk,” then-Col. Eric Mathewson wrote in the Air Force’s 2009 UAS Flight Plan.

But there’s an artificial limit on this potential. It’s not technology or even funding that really constrains robotic warplane development. Rather, it’s the willingness of human beings to surrender increasing degrees of control to mobile, lethal, thinking machines whose autonomy may mean they fall outside existing law. Trust and accountability are holding back our robots.

Autonomous UAS and human warriors alike make mistakes. Missions fail. Innocents get hurt or die. When a person screws up, he’s tried and, if found guilty, punished. Justice is served. You can’t, however, take a robot to court.

So who would be at fault following an errant drone strike? The programmer of the robot’s targeting software? The commanding officer of the UAS squadron? The regional combatant commander?

“For now, our laws are simply silent on whether autonomous robots can be armed with lethal weapons,” Singer wrote. “We therefore must either enact a ban on such systems soon or start to develop some legal answers for how to deal with them.”

At this point, a ban is unthinkable. “You have an entire generation of young troops and officers who have gone from not using robots to not contemplating going out on an operation without one,” Singer told Breaking Defense. A legal solution is the only viable way forward, the sooner the better.

Singer proposed a flexible legal regime. “If a programmer gets an entire village blown up by mistake, he should be criminally prosecuted … Similarly, if some future commander deploys an autonomous robot and it turns out that the commands or programs he authorized the robot to operate under somehow contributed to a violation of the laws of war … then it is proper to hold the commander responsible.”

New laws can’t happen fast enough. Every day the X-47 inches closer to its first carrier landing. Boeing, meanwhile, has revived the basic X-45 design as the larger, more powerful Phantom Ray. It flew for the first time in April. Other advanced drone designs include General Atomics’ Avenger and the RQ-170 built by Lockheed Martin. All could benefit from more autonomy.

‘Bots on a short leash
The Predator and its larger cousin the Reaper, both built by General Atomics, are hands-on robots — “Remotely Piloted Aircraft,” to use the Air Force’s term. Their every movement is steered by teams of pilots sitting in trailers at the aircraft’s launch site and at Air Force and CIA bases in the U.S. and allied nations. A mix of satellite and line-of-sight signals connects the pilots to their robots and beams back images of what the drones “see”; commands to fire missiles or drop bombs are issued by humans. In short, the Predator and Reaper do very little thinking on their own.
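
In software terms, a remotely piloted aircraft inverts Singer’s paradigm: the “think” step lives almost entirely on the ground. A purely illustrative sketch of that division of labor, with all names invented:

```python
# Illustrative sketch: a remotely piloted aircraft executes uplinked
# commands and downlinks imagery; it makes no decisions of its own.
from dataclasses import dataclass

@dataclass
class Command:
    kind: str      # e.g. "steer", "orbit", "fire_missile"
    payload: dict  # parameters chosen by the human crew

class RemotelyPilotedAircraft:
    def __init__(self, datalink):
        self.datalink = datalink  # satellite or line-of-sight radio

    def fly(self):
        while True:
            self.datalink.send(self.camera_frame())  # beam back what it "sees"
            cmd = self.datalink.receive()            # wait for human direction
            self.execute(cmd)                        # no command, no action

    def camera_frame(self):
        """Stub: capture one frame of sensor imagery."""

    def execute(self, cmd: Command):
        """Stub: steer, orbit or release a weapon exactly as ordered."""
```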

In the early years of armed drones, there was a good reason for this. “The sophistication of the human thinking process and the human sensors have yet to be replicated by a computer,” David Vesely, a retired Air Force lieutenant general, said in 2006.

But around the time Vesely spoke those words, engineers at Boeing were discovering that the latest sensors, processors and algorithms could perhaps produce robotic warplanes with nearly human-like thinking processes. These improved drones wouldn’t be fully autonomous, since they would still require a human being to feed them mission parameters before a flight. But they would be much more autonomous than any Predator or Reaper. Once airborne, they’d be mostly on their own.
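
Conceptually, the human contribution shrinks to a package of mission parameters loaded before takeoff. A hypothetical sketch of that hand-off, with invented fields and names:

```python
# Hypothetical pre-flight hand-off: a human defines the mission,
# after which the aircraft flies it without further steering.
from dataclasses import dataclass, field

@dataclass
class MissionParameters:
    patrol_area: tuple        # e.g. (lat, lon) of the search-box center
    rules_of_engagement: str  # e.g. "identify targets, do not engage"
    time_on_station_min: int = 120
    waypoints: list = field(default_factory=list)

def launch(drone, params: MissionParameters):
    drone.load(params)  # the last mandatory human input...
    drone.take_off()    # ...after which the drone is mostly on its own
```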

The newer generation of drones would match Singer’s definition of a robot warrior. “They carry sensors that gather information about the world around them, processors that use that information to make appropriate decisions and effectors that act to create change in the world around the machine, such as by movement or firing a weapon.”

Boeing’s work on drone autonomy began back in the late 1990s but found its best application in the on-again, off-again X-45 initiative. The jet-powered, flying-wing X-45 was originally an experimental Air Force program aimed at producing a pilot-less bomber. Paired with Northrop Grumman’s similar X-47 program, the X-45 passed to the fringe-science Defense Advanced Research Projects Agency in 2003 and got canceled less than three years later.

Amid these programmatic upheavals, the X-45 managed to fly around 50 test missions and proved it could steer itself into defended territory, identify targets and fire weapons, all without human intervention. “The main issue we always ran into was that no one trusted it,” a Boeing engineer who worked on the X-45 told Breaking Defense, on condition of anonymity. “If a UAV is nearly fully autonomous and puts a bomb on a school bus and not a supply truck, who gets held up for the penalty?”

To this day, that same mistrust prompts program officials to include high degrees of human intervention in robotic systems — and that can seriously degrade performance. In 2009, Massachusetts Institute of Technology professor Missy Cummings devised an experiment that tasked a human operator with coordinating large numbers of highly autonomous UAS.

Cummings found that the drones’ effectiveness increased as human intervention decreased. “Poor performance was exacerbated by a lack of operator consensus to consider a new plan when prompted by the automation,” she wrote, “as well as negative attitudes towards Unmanned Aerial Vehicles in general.”
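
Her finding is easy to intuit with a toy queueing model: if every decision must wait for one human’s sign-off, the operator saturates as the fleet grows. The numbers below are invented purely to show the shape of the problem, not to reproduce Cummings’ experiment:

```python
# Toy model (invented numbers): one operator approving decisions for N drones.
# Each drone raises one decision per minute; each approval takes 10 seconds.

def operator_utilization(num_drones, decisions_per_min=1.0, sec_per_approval=10.0):
    demand = num_drones * decisions_per_min * sec_per_approval  # seconds of work per minute
    return demand / 60.0  # fraction of the operator's time consumed

for n in (1, 4, 8, 12):
    u = operator_utilization(n)
    status = "saturated" if u >= 1.0 else f"{u:.0%} busy"
    print(f"{n:>2} drones: operator {status}")

# Past six drones the human can no longer keep up and decisions queue,
# exactly the regime where reducing intervention improves fleet performance.
```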

If you love something, set it free
Northrop’s X-47 is the natural result of this imbalance between performance and human control. The bat-shaped ‘bot survived its tenure at DARPA and transferred to the Navy for the $1 billion Unmanned Carrier Air System Demonstration program, which aims to fly a robotic warplane off a carrier deck in 2013. The X-47 could form the basis of the world’s first armed, jet-powered, fighter-class robot warplane, and a possible centerpiece system for the next decade of UAS. But in its current form, the X-47 is badly handicapped by concerns over accountability.

The current X-47B can be highly autonomous, but also includes a redundant “man in the loop.” Northrop’s “air-ship interface” includes radio links and algorithms allowing a carrier and an X-47B to communicate with each other. The carrier steers the drone through crowded airspace and can even guide it in for a deck landing while the operator watches, poised to override the machine-to-machine planning. In July, Northrop successfully tested this system using the USS Eisenhower and a manned F/A-18D surrogate plane.
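
In software terms, that “man in the loop” is an override hook: the machine-to-machine plan executes by default, and the human’s only job is the veto. A hypothetical sketch of the pattern (this is not Northrop’s actual interface):

```python
# Hypothetical override hook: the approach plan runs automatically
# unless the watching operator vetoes it.

def execute(step):
    """Stub: fly one segment of the machine-planned approach."""

def wave_off():
    """Stub: abort the landing and climb away from the deck."""
    return "wave-off"

def fly_approach(plan, operator):
    for step in plan:              # machine-to-machine guidance by default
        if operator.has_vetoed():  # the human's only role is the veto
            return wave_off()
        execute(step)
    return "trapped"               # arrested landing, no intervention needed
```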

The air-ship interface is just one facet of the X-47B’s autonomy. Elsewhere across its mission profile, the drone does not need human guidance, but waits for it anyway. “Even though it’s possible for a UAS to find a target, identify it and give those coordinates electronically to a weapon, it won’t do that unless it’s told to,” Carl Johnson, a Northrop vice president, told Breaking Defense. “The technology is there, but there is still a need for a human in the loop.”
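
Johnson’s point reduces to a single conditional: the targeting chain is automated right up to a gate that only a human order can open. One more illustrative sketch, with invented names:

```python
# Illustrative weapons-release gate: targeting is automated end to end,
# but release is hard-gated on an explicit human order.

def engage(target, weapon, human_release_order=None):
    coords = target.coordinates()  # the drone found and identified the target
    weapon.program(coords)         # coordinates handed off electronically
    if human_release_order is None:
        return "holding"           # "it won't do that unless it's told to"
    return weapon.release()        # only a human order opens the gate
```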

Reducing the human’s role could mean a freer and more effective X-47 and, by extension, a far more powerful future drone air force. But that can’t happen without a legal consensus regarding robot accountability.

The first decade after 9/11 proved that drones and people can make powerful teams. The second decade could see robots proving their worth all on their own. But only if we can hold them — that is, someone who controls them — accountable.