Read the full text of the Anthropic lawsuit here: https://cleanupcityofstaugustine.blogspot.com/2026/03/anthopif-pbc-v-us-department-of-war.html
Anthropic Sues Pentagon Over ‘Supply Chain Risk’ Label
The artificial intelligence company filed two lawsuits against the Department of Defense, saying it was being punished on ideological grounds.
Anthropic sued the Department of Defense on Monday, challenging the Pentagon’s decision to label it a “supply chain risk” and escalating a rancorous dispute over the use of artificial intelligence in warfare.
The A.I. company filed two lawsuits — one in the U.S. District Court in the Northern District of California and one in the D.C. Circuit Court of Appeals — accusing the Pentagon of using the supply chain risk designation inappropriately to punish it on ideological grounds.
The designation, which effectively cuts off Anthropic’s work with the Defense Department, is typically applied to firms deemed a major national security risk, such as companies with ties to the government of China. The label has never before been used against an American company.
“This is a necessary step to protect our business, our customers and our partners,” Anthropic said in a statement. “We will continue to pursue every path toward resolution, including dialogue with the government.”
A Pentagon spokesman said the department did not comment on litigation as a matter of policy.
The lawsuits open a new chapter in the fight between Anthropic and the Department of Defense. The two sides clashed last month in negotiations over a $200 million contract to provide the Pentagon with A.I. technology on classified systems. Anthropic, which is based in San Francisco, said it did not want its A.I. to be used in mass surveillance of Americans or for autonomous lethal weapons. The Pentagon said a private company could not set policy for the U.S. government.
The talks between Anthropic and the Department of Defense eventually fell apart. Shortly thereafter, Defense Secretary Pete Hegseth announced he was labeling Anthropic a supply chain risk. Last week, the Pentagon formally notified Anthropic that it had received the supply chain risk designation.
In its lawsuits on Monday, Anthropic argued that the statutes authorizing the supply chain risk label were narrow and did not apply to American firms. The company also said the order was ideologically motivated to penalize Anthropic, and that it violated the company’s First Amendment right to express its views.
Anthropic’s contracts with the government are already being canceled, the company said.
“Current and future contracts with private parties are also in doubt, jeopardizing hundreds of millions of dollars in the near-term,” according to the company’s filings. “On top of those immediate economic harms, Anthropic’s reputation and core First Amendment freedoms are under attack.”
Anthropic’s A.I. technology has been widely used inside the Department of Defense on classified systems, particularly to analyze vast amounts of data collected by U.S. intelligence agencies and to sort through the information quickly. Anthropic’s technology continues to be used by the Pentagon, including in operations underway in the Middle East, two people with knowledge of the matter said.
Jessica Tillipman, an associate dean at the George Washington University Law School, said the supply chain label was an extreme step by the Pentagon.
“They are transforming what is designed to be national security tools into a point of leverage for business,” Ms. Tillipman said.
Last week, a group representing some of the world’s largest tech companies sent a letter to Mr. Hegseth about the Anthropic fight and the decision to label the start-up a supply chain risk.
“We are concerned,” wrote the Information Technology Industry Council, which includes Nvidia, Google, Microsoft, Apple and Amazon. “Emergency authorities such as supply chain risk designations exist for genuine emergencies and are typically reserved for entities that have been designated as foreign adversaries.”
Anthropic has offered to continue negotiating with the Pentagon as its lawsuits wind their way through the courts. The company has also offered to help move the Pentagon off its technology and onto another A.I. system, two people familiar with the discussions said. In recent weeks, OpenAI and Elon Musk’s xAI have signed agreements with the Department of Defense to provide technology on classified systems.
(The New York Times sued OpenAI and Microsoft in 2023, accusing them of copyright infringement of news content related to A.I. systems. The two companies have denied those claims.)
OpenAI announced an agreement with the Pentagon last month, hours after President Trump ordered federal agencies to stop using Anthropic’s technology within six months.
Unlike Anthropic, OpenAI agreed to let the Pentagon use its A.I. systems for any “lawful purpose.” The company said it had also negotiated terms that allowed it to uphold its so-called safety principles by installing specific technical guardrails on its technology. The company said it included additional protections to prevent its technology from being used in mass surveillance of Americans, though critics said the terms still allowed loopholes for the Pentagon.
Julian E. Barnes and Cade Metz contributed reporting.
Sheera Frenkel is a reporter based in the San Francisco Bay Area, covering the ways technology affects everyday lives with a focus on social media companies, including Facebook, Instagram, Twitter, TikTok, YouTube, Telegram and WhatsApp.