The Pentagon’s Friday move to label Anthropic’s AI models a “supply-chain risk” likely won’t stand up in court and could trigger a wave of expensive legal judgments, according to legal experts and officials who spoke with Defense One and described the move as legally “dubious.” A defense official who manages information security called the designation “ideological” rather than an accurate description of risk.
Quick recap: Last week, after AI company Anthropic and the Department of Defense failed to reach an agreement on AI safety standards, Defense Secretary Pete Hegseth said on X that he was “directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security” and that, “Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” Hegseth’s comments followed President Trump’s social post directing all federal agencies to stop using Anthropic. Many have already stopped using its software.
Hegseth also said in his post that Anthropic will continue to provide services to the Defense Department for six months, “to allow for a seamless transition” to another frontier AI model.
But no one knows if the statement will result in an actual, legal designation or if it was just a negotiating tactic.
“We have not yet received direct communication from the Department of War or the White House on the status of our negotiations,” Anthropic said in its own statement Friday.
The move to bar virtually any company that works with the Defense Department from also working with Anthropic could have devastating effects on the AI firm. Adam Conner, the vice president for technology policy at The Center for American Progress, wrote on X that Anthropic relies on large-scale cloud computing providers like Amazon Web Services to train its models and host its services.
“It’s the equivalent of the death penalty for Claude since AWS and Google Cloud could no longer host Anthropic,” Conner said.
And the penalty doesn’t fit the alleged crime, several sources told Defense One.
The Pentagon’s stance is that allowing private companies to dictate terms of use for their products to the Pentagon could create risks or delays for soldiers during operations.
However, a defense official told Defense One that elements of U.S. Central Command used Anthropic’s model, among other AI tools, as part of Operation Epic Fury. The official said the military had already spent hundreds of hours training the model, and did so under rigorous human oversight. While the official emphasized that CENTCOM has many AI tools and that the move will not affect operations, they said the notion that Anthropic’s model could be quickly or easily swapped for one from another frontier AI company does not reflect reality.
“If a command trained more off of Claude than OpenAI’s ChatGPT, for example, putting combat data against a particular model, that model is going to outperform another provider just because you’ve trained on it for however long,” the official said.
Hegseth’s statement suggests the supply-chain risk designation stems from the belief that “Anthropic’s stance is fundamentally incompatible with American principles,” rather than a failure of the model to operate as designed, leaked intelligence, or technical vulnerability.
Anthony Kuhn, a managing partner at the New York law firm Tully Rinckey, told Defense One that the designation, accompanied by the threat against Anthropic’s corporate and commercial partners, could expose the Pentagon to lawsuits—not only from the company, but also from the defense contractors it is threatening—if the Pentagon cannot prove the risk is real.
That’s because the definition of what constitutes a supply-chain risk is not up to the administration, Kuhn said; it is a matter of law, specifically Title 10, Section 3252. “It deals with any type of potential sabotage or maybe creating a back door in an IT system, or any of those risks. And in this situation, he’s not expressing a risk. In fact, they’re going to continue using the organization’s software for the next six months,” Kuhn said. Furthermore, Kuhn noted that under that law, Hegseth would not have the authority to bar private companies from working with one another.
Another defense official who specifically evaluates supply-chain and other potential intelligence threats told Defense One “there is no evidence of supply-chain risk” from Anthropic’s model. The official called the designation “ideologically driven.”
And defense contractors that obey the administration’s demand and cut ties with the company could open themselves to lawsuits, Kuhn said, even though they did not issue the ban themselves. While such a scenario would depend on venue, jurisdiction, and other factors, the legal doctrine of joint and several liability “imposes on each wrongdoer the responsibility for the entire damages awarded, even though a particular wrongdoer’s conduct may have caused only a portion of the loss,” according to a 2019 Supreme Court opinion.
If Anthropic were to take that route, Kuhn said, “They would likely file suit against everybody who’s involved and just get their money one way or another, and then leave it up to everyone to fight about who owed them the money.”
Anthropic has vowed to challenge the designation in court, should it become official, but did not comment on specific legal action the company might take.
The situation represents a significant escalation of what is essentially a philosophical disagreement. The “stance” in question relates to Anthropic’s preferred safeguards for the use of AI—safeguards that prohibit the use of the model for hypothetical autonomous weapons and mass surveillance of the U.S. population. These are two use cases that “have never been included in our contracts with the Department of Defense,” and “we believe they should not be included now,” the company said in a Feb. 26 statement.
The apparent move to damage the company rather than simply walk away is already chilling relations between the Pentagon and the technology firms it is trying to attract, Jessica Tillipman, an associate dean at George Washington University Law School, told Defense One.
“If the government just thinks it’s going to keep trying these outlandish legal theories as a means to inflict maximum damage… I don’t know how any company makes a major move right now, given this,” Tillipman said. “Everyone looks at this and goes, ‘This is so legally dubious.’”