Anthropic wins preliminary injunction in Trump DOD fight

Dario Amodei, CEO and co-founder of Anthropic, speaks onstage during the 2025 New York Times DealBook Summit at Jazz at Lincoln Center on December 3, 2025 in New York City.

Michael M. Santiago | Getty Images

A federal judge in San Francisco granted Anthropic’s request for a preliminary injunction in its lawsuit against the Trump administration. 

Judge Rita Lin issued the ruling on Thursday, two days after lawyers for the artificial intelligence startup and the U.S. government appeared in court for a hearing. Anthropic sued the administration to try to reverse its blacklisting by the Pentagon and President Donald Trump’s directive banning federal agencies from using its Claude models.

Anthropic sought the injunction to pause those actions and prevent further monetary and reputational harm as the case unfolds. 

Anthropic issued the following statement on the ruling: “We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.” 

“Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation,” Judge Lin wrote in the order. A final verdict in the case could still be months away. 

During Tuesday’s hearing, Lin pressed the government’s lawyers about why Anthropic was blacklisted. Her language in the order was even sharper.

“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government,” she wrote.

Anthropic’s suit followed a dramatic couple of weeks in Washington, D.C., between the Department of Defense and one of the most valuable private companies in the world.

In a post on X in late February, Defense Secretary Pete Hegseth declared Anthropic a so-called supply chain risk, meaning that use of the company’s technology purportedly threatens U.S. national security. The DOD officially notified Anthropic about the designation in a letter earlier this month.

Anthropic is the first American company to be publicly named a supply chain risk, as the designation has historically been reserved for foreign adversaries. The label requires defense contractors, including Amazon, Microsoft, and Palantir, to certify that they do not use Claude in their work with the military. 

The Trump administration relied on two distinct statutes, 10 U.S.C. § 3252 and 41 U.S.C. § 4713, to justify the action, and challenges to them must be brought in two separate courts. Because of that, Anthropic has filed another lawsuit seeking a formal review of the Defense Department’s determination in the U.S. Court of Appeals in Washington. 

Shortly before Hegseth declared Anthropic a supply chain risk, President Donald Trump wrote a Truth Social post ordering federal agencies to “immediately cease” all use of Anthropic’s technology. He said there would be a six-month phase-out period for agencies like the DOD.

“WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company run by people who have no idea what the real World is all about,” Trump wrote.

The Trump administration’s actions surprised many officials in Washington who had come to admire and rely on Anthropic’s technology. The company was the first to deploy its models across the DOD’s classified networks, and it was championed for its ability to integrate with existing Defense contractors like Palantir. 

Anthropic signed a $200 million contract with the Pentagon in July, but as the company began negotiating Claude’s deployment on the DOD’s GenAI.mil platform in September, talks stalled.

The DOD wanted Anthropic to grant the Pentagon unfettered access to its models across all lawful purposes, while Anthropic wanted assurance that its technology would not be used for fully autonomous weapons or domestic mass surveillance. 

The two failed to reach an agreement, and now, the dispute will be settled in court. 

“Everyone, including Anthropic, agrees that the Department of [Defense] is free to stop using Claude and look for a more permissive AI vendor,” Lin said during the hearing Tuesday. “I don’t see that as being what this case is about. I see the question in this case as being a very different one, which is whether the government violated the law.”
