Pentagon Raises Concerns About Anthropic’s AI Future
Anthropic PBC entered this year with remarkable momentum: soaring sales, several viral products and a substantial funding round that gave the startup a significant edge in the competitive global AI landscape. On Friday, the Trump administration issued back-to-back directives that threaten the growth of one of the nation's most successful artificial intelligence companies.

First, President Donald Trump directed federal agencies to stop using Anthropic's software, which has gained popularity especially as a programming assistant. Shortly after, the Pentagon designated the AI developer a supply-chain risk, a classification usually reserved for companies from nations the US considers adversaries.

The actions, which followed a tense standoff between the San Francisco-based startup and the Pentagon over AI safeguards, seek not only to restrict Anthropic's sales to the US government but also to affect numerous other companies. "No contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic," Defense Secretary Pete Hegseth said in a social media post late Friday.

The full ramifications for the company, and for the AI ecosystem, are yet to be determined. At minimum, competitors such as OpenAI, Alphabet Inc.'s Google and Elon Musk's xAI now have a chance to win government projects that had previously gone to Anthropic.
However, some legal and policy experts cautioned that the consequences could be far more severe if the Pentagon follows through on its declaration. Barring Anthropic from working with corporate customers that do business with the Defense Department would be "a death blow" to Anthropic's business, said Charlie Bullock, a lawyer and senior research fellow at the Institute for Law & AI, a Boston-based think tank. As described in Hegseth's post, the Pentagon's policy would effectively bar Anthropic from working with several of its biggest partners, including Amazon.com Inc., Bullock said.

Dean Ball expressed a similar view. "Nvidia, Amazon, Google will have to divest from Anthropic if Hegseth gets his way," he wrote in a post on X. "This is simply attempted corporate murder."

The uncertainty hits Anthropic at a pivotal moment. The company, founded in 2021 by a group of former OpenAI employees, is expected to be preparing for an initial public offering as early as this year. Anthropic has been working to persuade more businesses to pay for its popular Claude chatbot, both to offset the enormous costs of AI development and to justify its $380 billion valuation.

In a statement Friday, Anthropic called the move "legally unsound" and "a dangerous precedent." The company also signaled it is prepared for a legal fight over its software. "No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons," it said. "We will challenge any supply chain risk designation in court."

Some backers worry that Anthropic's refusal to yield to the Trump administration's demands could damage the company's reputation, portraying the startup as hostile and anti-American, according to an Anthropic investor.
Still, some investors are hesitant to voice their concerns publicly, or even to dissent in private. Anthropic is a prized asset in many portfolios, and Chief Executive Officer Dario Amodei's firm grip on the company's direction has led many venture capitalists to keep their criticisms to themselves even when they disagree with his decisions, according to several Anthropic investors. Other investors backed Anthropic regardless of its decision about working with the Pentagon, with some noting that the government accounts for only a small share of the model maker's revenue.

The decision has also won Anthropic significant support among tech leaders, with several CEOs praising its stance.

Hegseth had given the company a deadline of 5:01 p.m. Friday to let the Pentagon use Claude for any purpose permitted by law, with no usage restrictions from Anthropic. The startup has insisted that the chatbot not be used for mass surveillance of Americans or in fully autonomous weapons operations.

Trump's order for agencies to drop Anthropic posed some immediate risk to the firm, albeit one limited in scope for a company with a revenue run rate of $14 billion. In July, Anthropic secured a Defense Department agreement valued at up to $200 million, but records indicate the Pentagon paid Anthropic only $2 million in the past year. This month, Anthropic signed its first State Department agreement for Claude, worth just $19,000. Last year, the company also reached a blanket agreement with the General Services Administration allowing federal agencies to use Claude for a nominal fee of $1. Hegseth has set a six-month window for migrating from Anthropic's services to another AI provider.
The Defense Department's actions aim far wider, treating Anthropic much as the US treats Chinese companies it views as security risks. But Bullock said the legal authority Hegseth is invoking is in fact quite narrow: it lets the agency stop its contractors from using Anthropic's products in procurements tied to defense contracting, but it does not explicitly extend to their use of Claude in their broader business operations.

Hegseth is expected to rely on the Federal Acquisition Security Council, which was created during Trump's first term, to implement the policy, according to Peter Harrell. "The limit here, and where I think Hegseth is misleading at least as a legal matter, is that this should only apply to contractors in their DoD contracts," Harrell said. "If the Pentagon tried to bar companies that contract with it from having other unrelated business with Anthropic, the courts will throw that out quite quickly," he said, "but I can't say they won't try it. They appear to be open to experimenting with initiatives that may be reversed by judicial decisions."

The Pentagon did not respond to a request for comment.

A court challenge could ultimately buy Anthropic time, Bullock said. "For example, a court could grant the company a temporary restraining order or a preliminary injunction," he said. "But we will have to wait and see."

The Pentagon's decision reverberated quickly across the AI community, raising alarm about the safe deployment of powerful technology and underscoring how widely Claude Code is used in software development. Virtually every company that develops software does business with Anthropic, according to one AI startup executive, who spoke on condition of anonymity. "Not being able to use Claude Code would be disastrous for both the industry and US competitiveness," the executive said.