Anthropic has filed a lawsuit against the U.S. government to prevent the Pentagon from adding the company to a national security blocklist, arguing that the designation is unlawful and violates its free speech and due process rights.
The legal action follows a Department of Defense letter confirming that Anthropic had been labeled a supply chain risk. The designation threatens Anthropic's business with federal agencies and could undermine its market position as a leading AI provider.
In late February, Defense Secretary Pete Hegseth pressured Anthropic to remove safeguards from its AI systems. On February 27, CEO Dario Amodei refused to allow the company's models to be used for mass surveillance or autonomous weapons. Hegseth responded by threatening the supply chain risk designation and canceling a $200 million contract, and President Trump ordered all federal agencies to cease using Anthropic that same day.
Anthropic agreed to collaborate with the Department on an orderly transition to another AI provider. The lawsuit characterizes the government’s actions as an “unprecedented and unlawful […] campaign of retaliation.”
OpenAI quickly struck a deal with the Department of Defense following the dispute. CEO Sam Altman stated that OpenAI's safety principles prohibit domestic mass surveillance and require human responsibility for the use of force, including autonomous weapons. OpenAI wrote into its contract that its AI systems shall not be intentionally used for domestic surveillance of U.S. persons and nationals.
Caitlin Kalinowski, OpenAI's head of robotics hardware, resigned this weekend in response to the Defense Department deal. Kalinowski wrote on X that "surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got."
Anthropic said in a statement that “these actions are unprecedented and unlawful.” The company added, “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech.”
Engadget received a statement from an Anthropic spokesperson. The spokesperson said, “Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners.”
Anthropic said it will continue to pursue every path toward resolution, including dialogue with the government.