149th IS maintains sharp edge

149th Intelligence Squadron airmen conduct training in a computer lab at Mather Field, California, Dec. 2, 2023. The December 2023 UTA focused on airmen readiness, in keeping with the unit’s stated mission to “Organize, train, and equip Cyber-ISR leaders to provide intelligence in support of federal and state mission priorities.”

WASHINGTON — A landmark National Security Memorandum recently signed by President Joe Biden requires human oversight, safety testing and other safeguards for many military and intelligence applications of artificial intelligence. The memo also launches a sweeping review of how the Pentagon and intelligence agencies acquire AI, with recommendations for regulatory changes and other reforms due back next year.

However, neither the memo itself nor the accompanying Risk Management Framework [PDF] imposes significant new restrictions on AI-controlled drones, munitions and other “autonomous weapons,” the chief concern of many arms control activists around the world. Instead, the RMF largely defers on that issue to existing Pentagon policy, DoD Directive 3000.09 [PDF], which was extensively revised last year to restrict, but not prohibit, autonomous weapons (some of which already exist in the form of computer-controlled anti-aircraft and missile defenses). The new policy documents, by contrast, focus on AI used to analyze information and make decisions — including about the use of lethal force.

That said, the memo does mention “a classified annex” that “addresses additional sensitive national security issues, including countering adversary use of AI that poses risks to United States national security.” The published documents do not specify what kind of “adversary use” the annex covers nor what other “sensitive” issues it might address.

RELATED: Clear guardrails mean faster progress on AI: Biden signs sweeping guidance for DoD & IC

The other major question mark, of course, is the election: Many of the mandates in the memo and the RMF won’t even take effect until next year. While a Kamala Harris administration would presumably continue Biden’s policies, the GOP platform already promises to “repeal Joe Biden’s dangerous Executive Order [published last year] that hinders AI Innovation.” And former advisors to President Donald Trump have called for a “Manhattan Project” approach to accelerate military AI. A Trump administration might well remove all the restrictions and guardrails in the Biden plan.

The Devil’s In The Details

Current and former Biden administration officials have emphasized that the goal of the new policy is to accelerate adoption of AI by setting clear guardrails, not to hinder what they call “responsible” employment. As a result, the memo and RMF rarely impose outright prohibitions and more often allow AI development and deployment to proceed — if, and only if, an extensive checklist of best practices is followed.

One of the few blanket bans in the RMF is against using AI “[to] remove a human ‘in the loop’ for actions critical to informing and executing decisions by the President to initiate or terminate nuclear weapons employment.” This is a reaffirmation of longstanding US policy that computers can’t be allowed to launch nuclear weapons without human oversight.

The framework restricts use of AI to forecast likely civilian casualties and collateral damage when planning a potential military strike. Any such use of AI requires both technical safeguards, such as continuous “rigorous testing” of the AI, and human oversight “by trained personnel.” The framework forbids computer-generated intelligence reports and analysis “based solely on AI” unless they are clearly labeled with “sufficient warnings” to the reader.

RELATED: Can tech reduce civilian deaths in conflict? Mark Milley isn’t so sure.

The framework then goes into a longer list of “high impact AI use cases,” all of which are allowed if, and only if, agencies implement a detailed set of guardrails.

Many of these precautions are about protecting human rights. That includes restrictions on AI “tracking or identifying [or] classifying an individual as a known or suspected terrorist, insider threat, or other national security threat.” The framework also mandates human oversight of any AI assessing eligibility for benefits ranging from political asylum to federal employment. Still other provisions aim to ensure that human beings remain in control of AI and can be held accountable for whatever the software does, from handling nuclear and other hazardous materials to deploying malware online.

Before a new AI is even deployed for the first time — and retroactively for AI already in use — agencies must conduct a thorough “risk and impact assessment,” including an up-front, bottom-line cost-benefit analysis of whether AI is even the right solution for the problem at hand, as opposed to more traditional tools. The mandated guardrails include extensive testing “in a realistic context,” preferably including “pilots and limited releases” prior to widespread deployment; analysis of “possible failure modes” and mitigations; assessment of whether the underlying data to train, test, and update the AI is accurate, adequate, and accessible; and ongoing monitoring to ensure the AI continues to perform as intended.

The mandatory best practices also address the human users of AI. They must be trained not only to operate it, but to watch for errors instead of blindly trusting the machine (what’s called “automation bias”), with reporting channels and whistleblower protections when problems arise.

Unsurprisingly, the policy does make it possible to waive some or all of these safeguards. But such waivers must come, in writing, directly from an agency’s designated Chief AI Officer, and be reviewed and renewed at least annually.

Overall, the National Security Memorandum and the Risk Management Framework are not about banning sensitive uses of AI, but regulating them — without getting in the way of rapid progress in the tech race with China.

“The NSM does a good job of balancing between making clear what is not allowed and enabling rapid adoption by the national security community, when the technology and the testing environment mean you can validate the ability to do it safely,” said Michael Horowitz, a University of Pennsylvania professor who, until recently, worked on AI as deputy assistant secretary of defense for emerging capabilities. And, he told Breaking Defense, the Pentagon has spent years figuring out just how to test and safeguard such systems.

“DoD and other agencies have decades of experience in developing and employing AI, and in designing policies to do so safely,” Horowitz said. “The NSM not only builds on those lessons learned, it breaks new ground in ensuring the national security community can adopt AI with responsible speed.”