Textron photo

Textron Ripsaw M5 robot in an armed configuration

UPDATED with remarks from the DIB press conference. WASHINGTON: A Pentagon-appointed panel of tech experts says the Defense Department can and must ensure that humans retain control of artificial intelligence used for military purposes.

“Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of DoD AI systems,” the Defense Innovation Advisory Board stated as its first principle of ethical military AI. Four other principles state that AI must be reliable, controllable, unbiased, and make decisions in a way that humans can actually understand. In other words, AI can’t be a “black box” of impenetrable math that makes bizarre decisions, like the Google image-recognition software that persistently classified black people as gorillas rather than human beings.

The board didn’t delve into the much-debated details of when, if ever, it would be permissible for an algorithm to make the decision to take a human life. “Our focus is as much on non-combat as on combat systems,” said board member Michael McQuade, VP for research at Carnegie Mellon University, at a press conference on the report.

In most cases, current Pentagon policy effectively requires a human to pull the trigger, even if the robot identifies the target and aims the gun. But the military is also intensely interested in non-combat applications of AI, from maintenance diagnostics to personnel management to intelligence analysis, and these systems, too, need to be handled responsibly, McQuade said.

So we’d boil the report’s fundamental principle down to this: When an artificial intelligence accidentally or deliberately causes harm — in the worst case, if it kills civilians, prisoners, or friendly troops — you don’t get to blame the AI and walk away. The humans who built the machine and turned it loose are morally and legally responsible for its actions, so they’d damn well better be sure they understand how it works and can control it.

“Just because AI is new as a technology, just because it has reasoning capability, it does not remove the responsibility from people,” McQuade said. “What is new about AI does not change human responsibility….You definitely can’t say ‘the machine screwed up, oh well.’”

IAI photo

IAI’s cargo robot, REX, is designed to follow infantry squads carrying extra supplies and weapons, or even evacuate casualties.

Ethical, Controllable, Impossible?

Now, the proposition that humans can ethically employ AI in war is itself a contentious one. Arms control and human rights activists like the Campaign to Stop Killer Robots are deeply skeptical of any military application. Celebrity thinkers like Stephen Hawking and Elon Musk have warned that even civilian AI could escape human control in dangerous ways. AI visionaries often speak of a coming “singularity” when AI evolves beyond human comprehension.

By contrast, the Defense Innovation Board – chaired by former Google chairman Eric Schmidt — argues that the US military has a long history of using technology in ethical ways, even in the midst of war, and that this tradition is still applicable to AI.

“There is an enormous history and capacity and culture in the department about doing complex, dangerous things,” McQuade told reporters. The goal is to build on that, adding only what’s specifically necessary for AI, rather than reinvent the entire ethical wheel for the US military, he said.

“The department does have ethical principles already,” agreed fellow board member Milo Medin, VP for wireless at Google. What’s more, he said, it has a sophisticated engineering process and tactical after-action reviews that try to prevent weapons from going wrong. “The US military has been so good at reducing collateral damage, has been good about safety, because of this entire [process],” he said. “The US military is very, very concerned about making sure its systems do what they are supposed to do and that will not change with AI-based systems.”

“Our aim is to ground the principles offered here in DoD’s longstanding ethics framework – one that has withstood the advent and deployment of emerging military-specific or dual-use technologies over decades and reflects our democratic norms and values,” the board writes. “The uncertainty around unintended consequences is not unique to AI; it is and has always been relevant to all technical engineering fields. [For example,] US nuclear-powered warships have safely sailed for more than five decades without a single reactor accident or release of radioactivity that damaged human health or marine life.” (Of course, no one has put those nuclear warships to the ultimate safety test of sinking them).

“In our three years of researching issues in technology and defense,” the board continues, “we have found the Department of Defense to be a deeply ethical organization, not because of any single document it may publish, but because of the women and men who make an ongoing commitment to live and work – and sometimes to fight and die – by deeply held beliefs.”

Army graphic

An Army soldier interacts with a virtual comrade in an “augmented reality” simulation.

Five Principles, 12 Recommendations

While the advisory board’s recommendations are not binding on the Defense Department, the Pentagon did ask for them. The board spent 15 months consulting over 100 experts – from retired four-star generals to AI entrepreneurs to human rights lawyers – and reviewing almost 200 pages of public comments. It held public hearings and roundtable discussions, and even conducted a wargame drawing on classified information, before it came out with its five principles and 12 recommendations.

The principles are worth quoting in full – with some annotations to explain them. Defense Department use of AI, the board says, should be:

  • Responsible: “Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of DoD AI systems.” This principle is both fuzzier and more fundamental than a straightforward ban on what activists call lethal autonomous weapons systems: AI that can decide on its own to use deadly force against a human target. The board’s scheme could conceivably allow such a “killer robot” – but the humans who designed and commanded it would be responsible for any accidents and atrocities. What’s more, they’d be responsible in the same way for a purely benign AI going wrong, such as a medical robot that injected the wrong drug and killed the patient.
  • Equitable: “DoD should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons.” One of the inherent weaknesses of modern machine learning is that the machines learn by crunching vast amounts of data, and if that data is wrong, or systematically biased in some way, the algorithm will simply echo that error – or even make it worse (a short code sketch after this list illustrates the effect). An experimental Amazon AI, for instance, decided that because its database of current employees was overwhelmingly male, it should exclude female candidates from the hiring process – regardless of their qualifications. Errors like that, or the Google image-processor that didn’t realize black people were human, are problematic enough in the business world, but in a military application they could misclassify innocent civilians as legitimate targets.
  • Traceable: “DoD’s AI engineering discipline should be sufficiently advanced such that technical experts possess an appropriate understanding of the technology, development processes, and operational methods of its AI systems, including transparent and auditable methodologies, data sources, and design procedure and documentation.” Machine learning relies on extraordinarily complex mathematical algorithms that modify themselves and mutate as they learn. The resulting mass of code is often incomprehensible even to the humans who wrote it, let alone to laypeople. DARPA and intelligence agencies have made a major push for explainable AI that can lay out its criteria for decisionmaking in ways its users can understand.
  • Reliable: “DoD AI systems should have an explicit, well-defined domain of use, and the safety, security, and robustness of such systems should be tested and assured across their entire life cycle within that domain of use.” The Defense Department has a well-established process for testing hardware, but software is much trickier, because it needs such frequent updates that it’s impossible to pin down a definitive final configuration to test. Machine learning software is even harder because it continually modifies itself. Some of the board’s detailed recommendations suggest changes to the testing process.
  • Governable.”  This provision was actually amended during the DIB meeting this morning, which bears some discussion.
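Before turning to that wording change, the bias-echo problem behind the Equitable principle is worth making concrete. The following is a minimal, hypothetical Python sketch, not code from the report, from Amazon, or from any real system: the synthetic data, the bucketing rule, and the thresholds are all invented for illustration. A model that simply learns to imitate historically skewed hiring decisions ends up scoring two equally qualified candidates very differently.

```python
# Hypothetical illustration of the "bias echo": a model trained to imitate
# biased historical decisions reproduces the bias. All data here is synthetic.
import random
from collections import defaultdict

random.seed(0)

# Synthetic "historical hiring" records: (qualification_score, is_male, was_hired).
# Past decisions favored male candidates regardless of qualification.
history = []
for _ in range(1000):
    score = random.uniform(0, 1)
    is_male = random.random() < 0.8                  # mostly male applicant pool
    was_hired = score > 0.5 and (is_male or random.random() < 0.2)
    history.append((score, is_male, was_hired))

# A naive "model": estimate the hire rate per (qualified, gender) bucket
# directly from the historical outcomes, i.e. learn to imitate the past.
counts = defaultdict(lambda: [0, 0])                 # bucket -> [hired, total]
for score, is_male, was_hired in history:
    bucket = (score > 0.5, is_male)
    counts[bucket][0] += was_hired
    counts[bucket][1] += 1

def predicted_hire_rate(qualified: bool, is_male: bool) -> float:
    hired, total = counts[(qualified, is_male)]
    return hired / total if total else 0.0

# Two equally qualified candidates get very different scores, because the
# model has simply echoed the bias baked into its training data.
print("qualified man:  ", round(predicted_hire_rate(True, True), 2))
print("qualified woman:", round(predicted_hire_rate(True, False), 2))
```

The “deliberate steps” the principle calls for would have to catch exactly this kind of disparity, for example by auditing the training data and the model’s outputs before a system is ever fielded.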

The original wording read as follows: “DoD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and disengage or deactivate deployed systems that demonstrate unintended escalatory or other behavior.” This language isn’t just calling for “killer robots” to have an off switch. It’s suggesting military AI should have some ability to self-diagnose, detect when it’s going wrong, and deactivate whatever is causing the problem. That requires a level of awareness of both self and the surrounding environment that’s beyond existing AI – and predicting potential unintended consequences is difficult by definition.

The revised wording of the final report changed “disengage or deactivate….”  to the more specific “human or automated disengagement or deactivation.” In other words, the decision to hit the off switch could be made either by a human or by a machine.

“The DoD should have the ability to turn things off, to detect [problems] and turn them off through an automated system,” Medin said. That safety feature doesn’t have to be an artificially intelligent autonomous system itself, he said: you could use conventional software with predictable IF-THEN heuristics to monitor the more intelligent but less predictable AI, or even, in some cases, a hardware cut-out that makes certain actions physically impossible. (One historical example: old-fashioned fighter planes had interrupter gear that kept their machine guns from firing when the propeller blade was in the way.)
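To make that idea concrete, here is a minimal Python sketch of such a non-AI governor. It is hypothetical: the class names, thresholds, and rules are invented for illustration and are not drawn from the DIB report or any DoD system. A handful of fixed, predictable IF-THEN checks sit between an AI component’s recommendations and the outside world; any violation disengages the action and alerts a human operator.

```python
# Hypothetical sketch of a deterministic, rule-based "governor" watching a
# less predictable AI component. Names, thresholds, and rules are invented.
from dataclasses import dataclass


@dataclass
class ProposedAction:
    """An action recommended by the (hypothetical) AI component."""
    description: str
    confidence: float               # the AI's own confidence in its recommendation
    inside_approved_envelope: bool  # within the system's authorized operating area?


class RuleBasedGovernor:
    """Plain IF-THEN watchdog wrapped around the AI's outputs."""

    MIN_CONFIDENCE = 0.95
    MAX_ACTIONS_PER_MINUTE = 3      # crude guard against unintended escalation

    def __init__(self) -> None:
        self.actions_this_minute = 0

    def authorize(self, action: ProposedAction) -> bool:
        # IF any safety rule is violated THEN disengage and alert the operator.
        if not action.inside_approved_envelope:
            return self._disengage("action outside the approved envelope")
        if action.confidence < self.MIN_CONFIDENCE:
            return self._disengage("AI confidence below threshold")
        if self.actions_this_minute >= self.MAX_ACTIONS_PER_MINUTE:
            return self._disengage("rate limit hit; possible runaway behavior")
        self.actions_this_minute += 1
        return True

    def _disengage(self, reason: str) -> bool:
        print(f"GOVERNOR: blocking action and alerting a human operator ({reason})")
        return False


# Example: the AI proposes an action the governor refuses to pass through.
governor = RuleBasedGovernor()
governor.authorize(ProposedAction("reposition sensor", confidence=0.80,
                                  inside_approved_envelope=True))
```

The point of keeping the governor simple and deterministic is that its behavior can be tested exhaustively, unlike the machine-learning system it supervises; a hardware cut-out plays the same role one layer lower, making certain actions physically impossible no matter what the software decides.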

The original wording also allowed for a human to “disengage or deactivate” an errant system, Medin continued, but fellow board member Danny Hillis “felt very strongly that the word human should be in there.”

The language itself is neutral on whether you should have a human, an automated non-AI system, or an AI controlling the off switch, McQuade said: “The principle that we’re espousing is that … you need to be able to have a method of detecting when a system is doing something that it’s not intended to do.”