
WASHINGTON — Defense startup Anduril is teaming up with ChatGPT-maker OpenAI in a “strategic partnership” that Anduril says will “develop and responsibly deploy advanced artificial intelligence (AI) solutions for national security missions” — particularly in countering drones.
“By bringing together OpenAI’s advanced models with Anduril’s high-performance defense systems and Lattice software platform, the partnership aims to improve the nation’s defense systems that protect U.S. and allied military personnel from attacks by unmanned drones and other aerial devices,” Anduril said in a press release Wednesday. “The Anduril and OpenAI strategic partnership will focus on improving the nation’s counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real-time.”
The collaboration comes as the Pentagon races to find ways to defend its troops and facilities, both abroad and at home, from drones of all sizes, a threat that has come rapidly to the fore, especially since Russia's invasion of Ukraine.
RELATED: Hundreds of drone incursions reported at military installations over past few years, NORTHCOM says
Breaking Defense recently observed a military exercise in the Colorado mountains during which different companies demonstrated their own counter-drone solutions for the homeland, from cyberattacks to nets. In July, the Pentagon conducted a similar experiment, this time attempting to defend against drone swarms.
“No one capability, whether kinetic or non-kinetic, in itself could really just beat this kind of [attack] profile,” Col. Michael Parent, chief of acquisitions & resources at the Army-led Joint Counter-small Unmanned Aircraft System Office, said at the time. “What we saw was they really do need a full system of systems approach, a layered approach.”
Officials and experts have held up AI as a potential key aid in defeating drone swarms, allowing much faster identification of the multiple simultaneous threats that would otherwise overwhelm current systems and their human operators. In October, defense industry giant Northrop Grumman announced it was adding AI to an Army command system to better defend against the drone threat.
However, the Pentagon is also grappling with the policy and ethical considerations of integrating AI into its operations, especially any missions involving kinetic fires. In other applications, like chat programs, the DoD has shown it is particularly wary of the mistakes that today's popular AI systems can make.
Anduril appeared to acknowledge that concern, and in its release Wednesday said the two firms’ “shared commitment to AI safety and ethics is a cornerstone of this new strategic partnership.” The collaboration, Anduril said, will be “subject to robust oversight.”