President Trump on Friday directed all federal agencies—including the Defense Department—to “immediately cease all use” of frontier AI firm Anthropic’s technology, though he also said there would be a six-month “phase out period.”
Trump’s announcement followed a tense back-and-forth between Anthropic and the Pentagon. The department widely uses the San Francisco company’s popular AI platform, Claude, on classified and unclassified networks, but took issue with the company’s refusal to give the Pentagon unrestricted access to its models.
In a Thursday statement ahead of the Pentagon’s Friday deadline, Anthropic CEO Dario Amodei said he refused to allow Claude to be used for mass surveillance of U.S. citizens or to guide fully autonomous weapons, an argument Trump framed as trying to “strong arm” the Defense Department and force it to “obey their terms of service.”
“I am directing every agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology,” Trump said in a Truth Social post. “We don’t need it, we don’t want it, and will not do business with them again.”
Trump said there would be a six-month “phase-out period” for agencies using Anthropic’s products at various levels, including classified settings and among civilian agencies. Trump threatened Anthropic with punishment should the company refuse to help in the phase-out. As Defense One’s Patrick Tucker reported Feb. 26, it may take several months or longer for the government to replace Anthropic’s tools.
“Anthropic had better get their act together and be helpful during this phase out period, or I will use the full power of my Presidency to make them comply, with major civil and criminal consequences to follow,” he said.
In his own post, Defense Secretary Pete Hegseth said he was ordering his department to “designate Anthropic a Supply-Chain Risk to National Security.”
Hegseth did not explain why a supply-chain risk would be permitted to operate in classified networks for up to six more months.
Amodei had noted this “contradictory” action in his Thursday statement.
“They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a ‘supply chain risk’—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”
Founded in 2021, Anthropic has developed models and tools that are already widely used across the federal government, largely through its partnership with leading cloud provider Amazon Web Services, through which it first gained a foothold in the Defense Department and intelligence agencies. Anthropic, along with xAI, Google, and OpenAI, each received a defense contract worth up to $200 million last July as part of the Pentagon’s push to harness AI.
Meanwhile, the General Services Administration, which manages hundreds of billions of dollars’ worth of contracts on behalf of all agencies, said in a statement Friday it would remove Anthropic from its Multiple Award Schedule and USAI.gov. Federal Acquisition Services Commissioner Josh Gruenbaum tweeted that GSA has terminated Anthropic’s OneGov deal, ending the availability of those contracts across the Executive, Legislative and Judicial branches.
“GSA stands with the President in rejecting attempts to politicize work dedicated to America’s national security,” GSA Administrator Edward C. Forst said in a statement. “Building resilient, secure, and scalable AI solutions demands alignment, trust, and a willingness to make hard calls. We’re committed to delivering results for Americans, and working with our AI industry partners who fit the bill.”
The rhetoric used by Trump, Hegseth, Pentagon spokesperson Sean Parnell, and Defense Undersecretary for Research and Engineering Emil Michael was notable for its stridency.
Hegseth posted: “…@AnthropicAI and its CEO @DarioAmodei, have chosen duplicity. Cloaked in the sanctimonious rhetoric of ‘effective altruism,’ they have attempted to strong-arm the United States military into submission – a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives…”
Michael posted, “…It’s a shame that @DarioAmodei is a liar and has a God-complex. He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk…”
And yesterday, Parnell posted that DOD only seeks the ability to “use Anthropic’s model for all lawful purposes,” adding that the idea that the Pentagon wants fully autonomous weapons or mass surveillance is a false narrative “peddled by leftists in the media.”
But in his statement, Amodei said those are the only two limits he insists on.
In “a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do,” he said.
Anthropic did not immediately respond to a request for comment on Friday.
Bradley Peniston contributed to this report.

