
John Beezer, Senior Advisor, US Senate Committee on Commerce, Science and Transportation; Matthew Johnson of the CDAO’s Responsible AI team; and Navrina Singh, CEO at Credo AI (Sydney Freedberg / Breaking Defense)

WASHINGTON — How do you stop the military from accidentally building Skynet from the Terminator movies? It turns out there’s an app for that — and the Pentagon wants the public to use it.

Three years ago, the Department of Defense adopted five broad principles for how it could ethically apply artificial intelligence to military missions, from eliminating racial bias in algorithms’ training data to building in kill switches in case AI goes awry. Last year, the Pentagon’s Chief Digital and Artificial Intelligence Office (CDAO) turned those principles into a detailed implementation strategy for “responsible AI” and promised to build a toolkit for weapons acquisition officials seeking to apply the general guidance to their specific programs. And last night, a senior advisor to the CDAO’s Responsible AI team said that toolkit will be released “very soon” — and it’ll be available online to everyone interested in, or worried about, what the DoD is doing.

“It’s a web app you can get to, it’s publicly accessible,” Matthew K. Johnson told reporters at a roundtable hosted by consulting giant Booz Allen Hamilton. “It needs to be publicly releasable and usable, so our industry partners know exactly what our expectations are [and] so the public knows exactly how the DoD is thinking.”

“It’s coming out soon,” he added. “I can’t say when but it is very, very soon.” And even non-US users are welcome, he went on: “A key piece of our defense strategy is integration and interoperability with partners [abroad]. It’s critical for projects like JADC2 [Joint All-Domain Command and Control].”

RELATED: In next Global Information Dominance Experiments, CDAO looking to speed allied info-sharing

Promoting understanding, transparency and cooperation among US officials, defense contractors and foreign allies is just part of the Pentagon’s ambitious agenda here, Johnson told a public forum earlier in the evening. The grand strategic plan, he said, is to use the US military’s buying power to nudge evolving technology towards American ideals of openness and privacy — and away from the Chinese Communist Party’s authoritarian vision of AI as a tool for control and propaganda.

“We’re really trying to shape this overall ecosystem because, surprise, there are others who are trying to shape this ecosystem … a lot like Belts and Roads,” he said, alluding to Xi Jinping’s much-touted “Belt and Road” initiative to recenter global trade on China. “Responsible AI looks like this kind of soft, cushy, amorphous thing, but, actually, I think it’s a tremendous source of soft power … if we can spread the technology that spreads US values.”

“One of the things our team thinks about a lot is, how do we incentivize responsible AI?” Johnson said. “We’ve been thinking primarily in terms of carrots rather than sticks, [and] one of the big carrots we have with DoD and a $900 billion a year budget is funding.

“[So] how do we set in place these very clear requirements and criteria to demonstrate that your technology is aligned with the DoD AI ethical principle[s] and our values … instead of some vague handwaving?” he went on.

One step was last year’s “responsible AI” implementation plan, which managed to translate the five general principles adopted in 2020 into 64 detailed “lines of effort,” from educating the workforce to publishing “model cards” explaining how each AI model works. But such formal plans are still “static documents” that program managers and their staffs may struggle to apply to their unique situation.

So the CDAO has built an online tool to walk the user through the self-assessment process, helping them figure out how to implement the five principles and 64 lines of effort in a specific program. The software is intended to cover every stage of a program, from the initial brainstorming through development and fielding to the final retirement of a technology, and to tailor the guidance it provides to the user’s specific responsibilities on the program.

For any given user, Johnson told reporters, the software is meant to answer a host of questions: “What are the responsible AI activities that I need to do? How do I do those activities? What tools do I need to use? How do I identify risks? How do I identify opportunities? [How do I] leverage all those and document those?”

It’ll take some time to work out all the bugs, Johnson acknowledged, and no one will be forced to use the tool, at least in its initial form. “At this stage, it’s just a voluntary tool or a voluntary resource, so by no means is it mandated,” he emphasized to reporters. “What we are releasing is a starting point, it’s a Version 1, and we are going to be validating it on a number of DoD use cases … and continually updating.”

Corrected 11/8/2023 at 10:15 am ET: The original version of this story gave the wrong first name for Matthew Johnson. The story has been updated to reflect the correct name.