Pentagon reportedly weighs dropping Anthropic over AI-use restrictions
The Pentagon is reportedly considering ending its partnership with AI firm Anthropic over the company's insistence on maintaining limits on how the US military may use its models, according to an administration official. The Pentagon is pressing four AI companies to let the military use their tools for "all lawful purposes," including weapons development, intelligence collection, and battlefield operations. Anthropic has not agreed to those terms, and after months of negotiations the impasse has reportedly left Pentagon officials frustrated.
The other three companies are OpenAI, Google, and xAI. An Anthropic spokesperson said the company had not discussed the use of its AI model Claude for specific operations with the Pentagon. According to the spokesperson, talks with the US government have so far focused on a narrow set of usage-policy questions, including strict limits on fully autonomous weapons and broad domestic surveillance, none of which concern ongoing operations.
A report on Friday said Claude played a role in the US military operation to capture former Venezuelan President Nicolas Maduro, where it was used through Anthropic's partnership with data firm Palantir. On Wednesday, it was reported that the Pentagon is urging leading AI companies, including OpenAI and Anthropic, to make their tools available on classified networks with fewer of the restrictions these companies typically impose on users.