
Anthropic has taken the U.S. Department of Defense (DoD) to court after the agency labelled the AI company a “supply chain risk,” a designation the Claude maker says is both extraordinary and unlawful.
The lawsuit, filed Monday in federal court in San Francisco, follows a weeks-long dispute between Anthropic and the Pentagon over how the military can use the company’s AI systems.
At the centre of the conflict is whether the U.S. military should have effectively unrestricted access to Anthropic’s technology.
According to the complaint, Anthropic drew two clear boundaries around potential defense use of its AI models:
- It does not want its systems used for mass surveillance of Americans.
- It does not believe its AI is ready to power fully autonomous weapons that make targeting and firing decisions without human involvement.
Defense Secretary Pete Hegseth pushed back, arguing that the Pentagon should be able to use AI systems for “any lawful purpose.”
After this standoff, the DoD labelled Anthropic a “supply chain risk.” That label carries heavy practical consequences: it is typically applied to foreign adversaries, and once it is in place, any company or agency doing business with the Pentagon must certify that it does not use Anthropic’s models.
Anthropic’s complaint describes the designation as “unprecedented and unlawful.” The company argues that the government is punishing it for drawing ethical lines around how its AI may be used.
“The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” the filing states, framing the dispute as a matter of constitutional rights as well as commercial impact.
The case pits a fast-rising AI developer against the U.S. defense establishment over the terms of military access to commercial AI systems, and over how far the government can go in pressuring technology suppliers once they set limits on use.