DoD photo

Lt. Gen. Jack Shanahan (left) and Pentagon CIO Dana Deasy (right) brief the press on principles for ethical AI.

PENTAGON: It took 15 months for the Defense Innovation Board to settle on the five broad principles for ethical use of artificial intelligence that Defense Secretary Mark Esper officially endorsed, in modified form, today. It’ll take longer than that to figure out what they mean.

DoD photo

Lt. Gen. Jack Shanahan

“As hard as it was, as challenging as it was, for the DIB to do the 15-month study, in some ways that was the easy part. We’re about to embark on the really challenging part,” the director of the Joint AI Center, Lt. Gen. Jack Shanahan, told reporters this afternoon. “The real hard part of this is taking the AI delivery pipeline” – from the initial algorithms and data sets, field testing, and training human users, to holding commanders accountable for lethal errors – “and understanding where those ethics principles need to be applied.”

“They’re broad principles for a reason,” he said. “Tech adapts, tech evolves. The last thing we wanted to do was put handcuffs on the department to say what we could and could not do. So the principles now have to be translated into implementation guidance.”

Why is this so hard? Well, just look at the five principles as stated in today’s press release. The Ten Commandments they are not:

  1. Responsible.  DoD personnel will exercise appropriate levels of judgment and care, while remaining responsible for the development, deployment, and use of AI capabilities.
  2. Equitable.  The Department will take deliberate steps to minimize unintended bias in AI capabilities.
  3. Traceable.  The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including with transparent and auditable methodologies, data sources, and design procedure and documentation.
  4. Reliable.  The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire life-cycles.
  5. Governable.  The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.

That’s…awfully vague, I said at the press conference this afternoon. When will we get the detailed guidance that tells service members, civil servants, and contractors what they have to do and what they must not do?

Senate Armed Services Committee video screenshot

Dana Deasy testifies before Congress.

At the moment, Pentagon CIO Dana Deasy replied, “We’re actually in Step 2. Step 1 was the signing of the memo [by Esper]. Step 2 is now starting communications. It’s really important that we get out and actively communicate what we mean by this, the very question that you raised.”

The Pentagon has already created a bureaucratic hub for this dialogue, an ethics subcommittee of the high-level AI Steering Committee. Shanahan has hired a director for this subgroup – he wouldn’t reveal her name today, just her gender. She, in turn, will bring in expertise from the Office of the Secretary of Defense, the armed services, the operational Combatant Commands around the world, and other federal agencies (such as the Intelligence Community).

The 66-page Defense Innovation Board study is “a tremendous starting point,” Shanahan said, but the Pentagon and industry need to refine it into detailed guidance for every step in the process of developing, testing, fielding and using AI. He said that process may require rewriting the existing DoD Directive 3000.09 on “autonomous and semi-autonomous functions in weapon systems,” which was written in 2012 before the current upsurge in AI development (and which contains some gaping loopholes).

DoD will also reach out to industry, testing the waters with what Shanahan called “non-obligatory language” in future contracts: essentially, non-binding, non-enforceable contract provisions that encourage companies to think through what it would take to implement one or more of the principles in their work.

Of course, there are many in industry and academe who are deeply suspicious of any military employment of AI. Shanahan himself headed the Project Maven AI intelligence-gathering initiative: Google worked on Maven, but initially kept its involvement secret, then dropped out after a revolt among employees.

“If we had had the AI ethics principles three years ago [when launching Maven], and we were transparent about what we were trying to do and why we were trying to do it, maybe we would have a different outcome,” Shanahan said.

So will the new principles – and whatever regulatory, policy, and contractual implementation follows – assuage AI engineers’ anxiety about working with the Pentagon?

“We would be doing these AI ethics principles regardless of the angst in the tech industry — and sometimes I think the angst is a little hyped — but we do have people that have serious concerns about working with the Department of Defense,” Shanahan said. “We do see this as a unique opportunity to work with academia and the tech industry on a set of principles. I think we’ll find we have far more in common than we do differences.”

“In fact, the person I chose to lead our implementation plan was out on the West Coast recently … and had some discussions with some of the biggest companies in industry,” he said. “I won’t name them, but I’ll tell you, there was a thirst for having this discussion, [and] they unanimously praised the DIB’s work … but what the team also found in talking with the companies is that nobody is very far along in this area of ethics implementation.”

Shanahan and Deasy said they expect DoD will work with industry to continuously refine the guidance for years to come as technology continues to evolve. It won’t be a once-and-done promulgation of commandments written in stone for all time, but an ongoing dialogue with industry, academia, and the public.

The hope is that all this transparency smooths the path to civil-military cooperation on AI – but if it sometimes slows things down, so be it, Shanahan said. “Sometimes speed to market in the tech industry matters more than anything else,” he said. “That’s not the case for the Department of Defense. We will move as fast as we can but while abiding by these five principles.”

Not every major military will do the same, he warned: “I do not believe, sitting here in this room this afternoon, that China and Russia are having any conversation like we’re having today.”