Networks & Digital Warfare

AI giant Anthropic says it ‘cannot in good conscience’ agree to Pentagon demands

The Pentagon has threatened to deem the AI firm a supply chain risk, something CEO Dario Amodei says is contradictory.

U.S. Air Force Master Sgt. Shane Keahiolalo, 169th Air Defense Squadron, Hawaii Air National Guard, tests the new Battle Management Training NEXT system at the Western Air Defense Sector, Aug. 26, 2021, Joint Base Lewis-McChord, Washington. (U.S. Air National Guard photo by Maj. Kimberly D. Burke)

WASHINGTON — AI company Anthropic has said that it will not agree to demands issued by the Pentagon regarding how its software can be used, despite threats from the Defense Department to use legal means to punish the company.

In a statement posted to Anthropic’s website late today, CEO Dario Amodei said that “these threats do not change our position: we cannot in good conscience accede to [the Pentagon’s] request.”

The statement is the latest in a days-long conflict between the Pentagon and the tech company. Reportedly at the center of the controversy are Anthropic's policies that limit the potential use of its Claude AI model in lethal autonomous operations and for mass domestic surveillance.

Pentagon spokesperson Sean Parnell said the DoD has “no interest” in using AI for either task, but chafed at the existence of the guardrails. DoD Undersecretary for Research and Engineering Emil Michael said earlier today that the DoD “would never” engage in mass surveillance in violation of the Fourth Amendment, but argued, “We won’t have any BigTech company decide Americans’ civil liberties.” Michael previously argued that it would be “not democratic” to “let any one company dictate a new set of policies above and beyond what [laws] Congress has passed.”

After Michael’s post today, Parnell issued a stark warning on X.

“Here’s what we’re asking: Allow the Pentagon to use Anthropic’s model for all lawful purposes,” Parnell wrote. “This is a simple, common-sense request that will prevent Anthropic from jeopardizing critical military operations and potentially putting our warfighters at risk. We will not let ANY company dictate the terms regarding how we make operational decisions.

“They have until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for” the Defense Department, he wrote.

In his Thursday letter, Amodei pushed back on the idea that the company is refusing to work with the department. "I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries," he said, noting that the company has voluntarily cut off funding from China and has already been working with the Pentagon.

However, Amodei said that there are two areas where “we believe AI can undermine, rather than defend, democratic values,” listing mass surveillance and fully autonomous weapons. Anthropic is unwilling, Amodei said, to remove safeguards on those two issues.

Department leaders “have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a ‘supply chain risk’ — a label reserved for US adversaries, never before applied to an American company — and to invoke the Defense Production Act to force the safeguards’ removal,” the CEO wrote. “These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”

“It is the Department’s prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.”

Michael fired back on X, accusing Amodei of having a "God-complex."

“He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk. The @DeptofWar will ALWAYS adhere to the law but not bend to whims of any one for-profit tech company,” he wrote.

Last summer, the Pentagon’s Chief Digital & AI Office awarded Anthropic, Google, xAI, and OpenAI contracts worth up to $200 million apiece to customize their popular generative AI applications for military use. Classified versions of Anthropic’s Claude AI are also available to Defense Department personnel through Amazon and Palantir, Semafor has reported.

Amodei reportedly met with Secretary of Defense Pete Hegseth this week in an attempt to iron out the issues in person, to no avail.

UPDATED 2/26/2025 at 9:56pm ET to include Emil Michael’s response to Anthropic’s statement.