A federal appeals court handed the Pentagon a key victory, allowing it to maintain a blacklist against AI developer Anthropic in a case with significant implications for the use of artificial intelligence in military applications.

A federal appeals court on Wednesday denied Anthropic’s request to temporarily lift a Pentagon blacklisting, dealing a blow to the AI startup in its high-stakes legal battle over the military’s use of its Claude AI models for warfare and surveillance. The ruling from the D.C. Circuit Court of Appeals allows the Department of Defense to continue barring Anthropic from contracts, a designation the company claims could cost it billions.
"In our view, the equitable balance here cuts in favor of the government," the appeals court said in its decision. "On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict."
The decision creates a split between federal courts, as a San Francisco judge in a related case last month granted Anthropic a preliminary injunction, temporarily blocking a separate blacklisting order. The legal battle stems from the Pentagon designating Anthropic a "national security supply chain risk" in early March—an unprecedented move against a U.S. tech company—after Anthropic refused to allow its AI to be used for autonomous weapons or domestic surveillance.
The dispute jeopardizes Anthropic's standing as a key government contractor and could reshape the competitive landscape for AI in the multi-billion dollar defense sector. With the Pentagon now exploring alternatives from Google DeepMind and OpenAI, the case serves as a critical test of the executive branch's power over tech companies and may chill public debate on AI safety and ethics.
Anthropic faces two lawsuits because the Pentagon invoked two distinct statutes to justify its supply-chain risk designation. Wednesday's D.C. ruling addresses one of them; the San Francisco ruling in Anthropic's favor addresses the other. The result is a muddled legal landscape for both the company and the government: the San Francisco injunction restored Anthropic's access for now, while the D.C. decision leaves the Pentagon free to blacklist the company under the separate statute.
The D.C. court acknowledged that Anthropic "will likely suffer some degree of irreparable harm absent a stay," but deemed the company's interests "primarily financial in nature." Oral arguments for this case are scheduled for May 19.
The blacklisting marks a dramatic reversal in the relationship between Anthropic and the Pentagon. The AI company signed a $200 million contract with the Defense Department in July and was the first to deploy its models across the DOD's classified networks. However, talks stalled in September when Anthropic sought to place ethical limits on the use of its technology, specifically barring its use in autonomous weapons systems.
The dispute escalated when Defense Secretary Pete Hegseth publicly declared Anthropic a supply chain risk in late February, a move that followed a social media post from President Donald Trump ordering federal agencies to "immediately cease" all use of Anthropic's technology.
As Anthropic battles the U.S. government in court, the United Kingdom is reportedly making a bid to attract the company. British officials have proposed incentives including an expansion of Anthropic's London office and a potential dual stock market listing, according to the Financial Times. The move highlights the global competition for leadership in artificial intelligence, with the UK attempting to capitalize on the rift between Washington and one of Silicon Valley's most valuable AI startups.
This article is for informational purposes only and does not constitute investment advice.