Anthropic Files Lawsuit Against Pentagon’s National Security Blacklist
Anthropic, the creator of AI chatbot Claude, has initiated legal action against the Trump administration following what it describes as an “unprecedented and unlawful” decision to classify the firm as a national security risk. The lawsuit challenges the Pentagon’s move to block the company’s operations, citing concerns over military applications of its technology.
On Thursday, the Pentagon labeled Anthropic a “supply chain risk” due to its refusal to permit unrestricted military use of Claude. This marks a significant escalation in the dispute, which has been centered on how the AI system could influence warfare scenarios. Anthropic has since responded by filing two separate lawsuits, one in California federal court and another in the Washington DC federal appeals court, each targeting distinct aspects of the Pentagon’s actions.
“These actions are unprecedented and unlawful,” Anthropic’s legal documents assert. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorises the measures taken here. Anthropic turns to the judiciary as a last resort to defend its rights and halt the Executive’s campaign of retaliatory measures.”
The Department of Defense has not provided a formal response, stating its policy is to avoid commenting on ongoing legal cases. Anthropic, backed by Alphabet’s Google and Amazon, has consistently opposed the use of its AI for mass surveillance of U.S. citizens and fully autonomous weapons systems.
Defense Secretary Pete Hegseth warned Anthropic of penalties if it did not accept “all lawful uses” of Claude. President Trump endorsed the move, vowing to direct federal agencies to stop using the AI assistant despite its deep integration into classified military systems, including those deployed in the Iran conflict.
Designating Anthropic a supply chain risk would restrict its involvement in defense projects by invoking tools designed to safeguard national security from foreign threats. It is the first known instance of the federal government applying such a designation to a U.S.-based company.
Anthropic, recently valued at $380 billion, has argued that the Trump administration’s sanctions are limited in scope, affecting military contractors only when they use Claude for defense-related tasks. Much of the firm’s projected $14 billion in annual revenue comes from businesses and government agencies using the AI for coding and other non-combat functions.
Last year, the Department of Defense signed agreements worth up to $200 million with major AI labs, including Anthropic, OpenAI, and Google. Microsoft-backed OpenAI recently announced a partnership with the U.S. military, a development that followed Hegseth’s push to blacklist Anthropic.
