Anthropic warns judge US AI ban might cost billions

Wed Mar 11 2026
Julie Young (758 articles)

Anthropic PBC told a judge it could lose billions of dollars in revenue this year and asked for swift action on its bid to block the Trump administration from designating the company a US supply-chain risk, following a dispute with the Pentagon over artificial intelligence safety. The startup made its case for urgency to US District Judge Rita F. Lin at a hearing in San Francisco, one day after it sued the Defense Department over the designation. The dispute centers on Anthropic's request for guarantees that its AI would not be used for mass surveillance of Americans or for autonomous weapons.

Michael Mongan, arguing for Anthropic, said Tuesday that the federal government's actions have prompted more than 100 enterprise customers to contact the company with questions about their ongoing business. He said a financial services firm has paused negotiations with Anthropic over a $50 million contract, a pharmaceutical company asked to shorten its contract by 10 months, and a financial technology company "explicitly tied" the reduction of its $10 million contract to $5 million to Anthropic's troubles with the federal government. Mongan said Anthropic's chief financial officer estimates the potential harm to the company's 2026 revenue at anywhere from hundreds of millions of dollars to billions. A hearing on Anthropic's request, originally set for April 3, has been moved up by the judge to March 24.

Mongan asked the federal government to commit to taking no retaliatory measures against Anthropic before the upcoming hearing, including issuing an executive order that could affect the AI startup. "I'm not prepared to offer any commitments on that issue," said James Harlow, arguing for the government. Anthropic is seeking a court order eliminating the supply-chain risk designation and compelling US agencies to rescind related directives. The company says it is being singled out for opposing the administration and argues that the legal principles at stake affect every federal contractor whose views are unwelcome to the government. The Pentagon formally notified Anthropic of its decision last week. Chief Executive Officer Dario Amodei then released a statement saying the government's actions were not "legally sound" and had left the company with "no choice but to challenge it in court." Anthropic has drawn support from the technology sector.

In a joint letter to the judge, numerous AI scientists and researchers from OpenAI and Google, both competitors and, in Google's case, an investor, voiced support for Anthropic, writing that current AI systems cannot "safely or reliably handle fully autonomous lethal targeting, and should not be available for domestic mass surveillance of the American people." Microsoft, a stakeholder in OpenAI and Anthropic, filed its own brief asking the judge to temporarily halt the government's actions, which it said could delay all current Defense Department "contracting for IT products and services." Microsoft also warned of the substantial costs government suppliers could incur to remove Anthropic software, noting that the distinctiveness of Anthropic's offerings might leave some with no alternatives.

Julie Young

Julie Young is a Senior Market Reporter and Analyst. She has been covering stock markets for many years.