Conflict between the Pentagon and Anthropic over military AI use
A significant conflict over the military's use of artificial intelligence came to light this week when Defense Secretary Pete Hegseth abruptly ended the Pentagon's and other government agencies' collaboration with Anthropic, invoking a law aimed at foreign supply chain risks to brand a US company with a scarlet letter. President Donald Trump and Hegseth have accused the rising AI star of jeopardizing national security after CEO Dario Amodei stood firm on concerns that the company's products might be used for mass surveillance or autonomous armed drones.

The San Francisco-based company has pledged to take legal action against Hegseth's proposal to classify Anthropic as a supply chain risk, an unprecedented application against a US entity of a law designed to address foreign threats. Anthropic said it would contest what it described as a legally unsound action "never before publicly applied to an American company." The looming legal confrontation could reshape the balance of power within Big Tech at a pivotal moment, along with the rules governing military applications of AI and other safeguards intended to avert technological threats to human life.

The controversy has handed a significant advantage to OpenAI, the creator of ChatGPT, which has capitalized on the situation to offer its technology to the Pentagon after Anthropic objected to certain terms set by the Trump administration. The development is poised to intensify the existing tensions between OpenAI CEO Sam Altman, who was briefly removed by his board in late 2023 over trustworthiness concerns, and Amodei, who left OpenAI in 2021 to found Anthropic, motivated in part by apprehensions about AI safety.
The Department of Defense's decision to designate Anthropic as a risk to the nation's defense supply chain will terminate its contract with the AI company, valued at up to USD 200 million. The Pentagon has stated that it will also prohibit other defense contractors from doing business with Anthropic. Trump said on Truth Social that most government agencies must immediately cease use of Anthropic's AI, while allowing the Pentagon a six-month window to phase out technology already integrated into military platforms. Anthropic contends that Hegseth lacks the legal authority to terminate its business relationships with other defense contractors. "Any company that still holds a commercial contract with Anthropic can continue to use its products for non-defense projects," the company said in a statement.

The supply chain risk designation was established to give American military leaders a mechanism to limit the Pentagon's exposure to companies that may pose a security threat. The list has generally featured companies with ties to adversaries, including telecom giant Huawei, associated with China, and cybersecurity specialist Kaspersky, linked to Russia. In Anthropic's case, the designation serves as a warning to other AI and defense companies: fail to meet our demands and you will be blacklisted. "We don't need it, we don't want it, and will not do business with them again!" Trump stated via social media. Trump's six-month grace period for the Pentagon effectively creates an opening for other companies to obtain the classified security clearances required to work with the agency. Anthropic says it has not yet received formal notification of Hegseth's designation. "When we receive some kind of formal action, we will look at it, we will understand it and we will challenge it in court," Amodei vowed during an interview.
Anthropic is now working to persuade businesses and government agencies that the Trump administration's supply chain risk designation affects use of Claude, its AI chatbot and computer coding agent, only by military contractors working on Department of Defense projects. "Your use for any other purpose is unaffected," Anthropic wrote in its statement. Making that distinction stick is essential for Anthropic, because a significant portion of its anticipated USD 14 billion in revenue this year comes from businesses and government agencies using Claude for computer coding and other tasks. According to an announcement disclosing an investment that valued the company at USD 380 billion, more than 500 customers pay Anthropic at least USD 1 million annually for Claude. Claude has gained significant traction as a viable replacement for a wide range of business software tools sold by major tech companies such as Salesforce and Workday, a threat that has driven a significant decline this year in the stocks of companies selling business software as a service. But with Anthropic now branded a supply chain risk, it is unclear whether customers will remain comfortable using Claude even for non-military purposes that could draw Trump's disapproval. Any widespread hesitation to adopt Claude, despite the significant progress it has made over the past year, could slow the advancement of AI in the US at a moment when the nation is striving to maintain its lead over China in a technology expected to transform the economy and society. At the same time, Anthropic and Amodei may now have a prominent platform to advocate for stronger safeguards on how AI is used.
“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” the company stated. “We will challenge any supply chain risk designation in court.”
In his interview, Amodei characterized Anthropic's conflict with the Trump administration as a defense of democratic principles. "Disagreeing with the government is the most American thing in the world," Amodei said. "And we are patriots. In everything we have done here, we have stood up for the values of this country." Shortly after its rival faced consequences, OpenAI's Altman revealed on Friday night that the company had reached an agreement with the Pentagon to provide its AI for classified military networks. Altman stated, however, that the AI restrictions at the center of Anthropic's dispute with the Pentagon are firmly established in OpenAI's new partnership. In a memo obtained by The Associated Press, Altman told OpenAI employees: "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines." Why the Pentagon accepted OpenAI's red lines while rejecting Anthropic's remains unclear. In his memo, Altman said the company is confident it can "de-escalate things" by collaborating with the Pentagon while maintaining robust safety protections. OpenAI's agreement with the Trump administration coincided with its announcement that it had raised an additional USD 110 billion, part of an infusion that values the San Francisco-based company at USD 730 billion. But OpenAI could face significant backlash if US consumers of ChatGPT perceive its collaboration with the Pentagon as prioritizing profit over AI safety.
The Anthropic rift may also present new opportunities for Musk, who co-founded OpenAI with Altman in 2015 before the two experienced a contentious split over safety concerns and financial matters. Musk has leveled accusations of fraud and other deceptive conduct against Altman in a case set to proceed to trial in late April. Musk currently manages the AI chatbot, Grok, which the Pentagon intends to grant access to classified military networks, raising concerns about its safety and reliability, especially in light of ongoing government investigations into its production of sexualized deepfake images. Musk has expressed support for the Trump administration in its conflict with Amodei, stating on his social media platform X that “Anthropic hates Western Civilization.” Google, having developed a suite of widely utilized AI tools based on its Gemini technology, may also be vying for additional contracts with the US military. However, a vocal segment of its workforce has been urging executives to steer clear of agreements that would contravene the company’s former motto, “Don’t be evil.” Executives at Google have yet to make any public comments regarding Anthropic’s split with the Trump administration.