Trump Team Takes Legal Stand Against Anthropic’s AI Tool Ban
The Trump administration has pledged to fight in court to remove Anthropic PBC's technology from all US government agencies after a dispute over how the company's artificial intelligence can be used. "For national security reasons, the terms of service for plaintiff Anthropic PBC's artificial intelligence technology have become unacceptable to the Executive Branch," the US Justice Department, which is representing the government, said Tuesday in a court filing. Anthropic, the maker of the Claude chatbot, sued to block the Defense Department from designating the company a risk to the US supply chain, escalating a high-stakes conflict over the guardrails on AI technology used by the military. Anthropic, which advocates restrictions on how its products are deployed, argued that a protracted legal battle could cost it billions of dollars in lost revenue.
The company is asking the court for a preliminary injunction to block the government's ban while the case proceeds. In their filing, DOJ lawyers said that during negotiations in early 2026, the Defense Department sought a contract provision allowing the Pentagon to use Anthropic's technology for any lawful purpose, but the company "refused" to accept the term, citing its usage policies for Claude. During the talks, "Anthropic's behavior more generally caused the department to question whether Anthropic represented a trusted partner with whom the department was willing to contract in this highly sensitive area," according to the filing.
The agency said that letting Anthropic retain access to its technical and operational warfighting systems "would introduce unacceptable risk" into Defense Department supply chains, according to the filing. "After all, AI systems are acutely vulnerable to manipulation, and Anthropic could attempt to disable its technology or preemptively alter the behavior of its model either before or during ongoing warfighting operations, if Anthropic — in its discretion — feels that its corporate 'red lines' are being crossed," the lawyers wrote. Emil Michael said in a filing that Anthropic showed "hostility" during its negotiations with the Pentagon. The company "appears to be taking a negotiation posture meant principally to benefit its public perception that is not centered on truth or fact," he added.
An Anthropic spokesperson said the company is reviewing the government's filing. A hearing is scheduled for Tuesday. "Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and our partners," the spokesperson said. Under President Donald Trump, the Pentagon and other federal agencies are shifting their AI initiatives to alternative providers, following a risk designation usually reserved for firms from nations the US considers adversarial. Anthropic had sought guarantees from the government that its AI would not be used for mass surveillance of Americans or to deploy autonomous weapons. Defense Secretary Pete Hegseth, in a Feb. 27 social media post, said the US military would continue using Anthropic's tools for "no more than six months to allow for a seamless transition to a better and more patriotic service."