Digital fighter jets dogfight. (Getty graphic)

WASHINGTON — As the Defense Department works on its own responsible and ethical use of artificial intelligence and autonomy, a senior official said today the Pentagon wants to build international cooperation on the military development of the technologies and could call together dozens of countries in the coming months to do just that. 

The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, launched in February last year, “is a really clear demonstration of a throughline in our commitment to responsible behavior,” Michael Horowitz, deputy assistant secretary of defense for force development and emerging capabilities, said today at a Center for Strategic and International Studies event. 

“I think that there’s a recognition that the sorts of norms we’re trying to promote are things that all countries should be able to get behind,” Horowitz said. “So they include things like a commitment to international humanitarian law. They include things like appropriate testing and evaluation for systems.” 

He noted that 51 countries, not including Russia or China, have signed the declaration, which the State Department says “aims to build international consensus around responsible behavior and guide states’ development, deployment, and use of military AI.”

“We’re actually working toward a potential plenary session in the first half of 2024 with those states that have endorsed the political declaration and we hope that even more will come on board, you know, before that happens and will come on board afterwards, and that includes everyone,” Horowitz said. 

RELATED: 3 ways intel analysts are using artificial intelligence right now: Ex-official

DoD has been on a path to getting the responsible and ethical use of AI and autonomy right through policies like DoD Directive 3000.09, the department’s guidance on autonomous weapons, and strategies like the 2023 Data, Analytics, and AI Adoption Strategy.

In 2022, the Pentagon published its long-awaited Responsible AI Strategy and Implementation Pathway, which acknowledged that the US military won’t be able to maintain a competitive advantage without transforming itself into an AI-ready organization that holds responsible AI as a prominent feature. Prior to releasing the strategy and implementation pathway, DoD adopted five broad principles for the ethical use of AI: responsible, equitable, traceable, reliable and governable. 

RELATED: Ethical Terminators, or how DoD learned to stop worrying and love AI in 2023

Horowitz said the political declaration “stands alongside a lot of the other AI-related initiatives that the Biden administration has launched, both domestically and internationally, over the last several months in particular.”

In October, President Joe Biden signed an AI executive order that the administration said was one of the “most significant actions ever taken by any government to advance the field of AI safety” in order to “ensure that America leads the way” in managing risks posed by the technology. As part of the executive order, the White House directed “that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government” and that federal agencies would be issued guidance for their use of AI.