Microsoft, Amazon Give Pentagon Greater AI Control
The Pentagon has reached agreements with additional technology companies to expand the use of advanced artificial intelligence tools on classified military networks, according to the Defence Department and two defence officials briefed on the matter. Nvidia Corp., Microsoft Corp., Reflection AI Inc., and Amazon.com Inc. have all recently signed agreements with the US Defence Department “for lawful operational use,” according to the announcement. On Friday, the Pentagon announced on X that Oracle Corp. had joined the list of technology companies committed to deploying their AI tools on classified networks. The agreements grant the Pentagon broad latitude to deploy advanced AI for covert combat operations, including support with targeting. The revised terms of use, centred on “lawful operational use,” significantly dilute several of the restrictions sought by Anthropic PBC that derailed its agreement with the Pentagon earlier this year.
Numerous technology companies already supply AI tools to the US military, but defence officials have been seeking to broaden the terms of use since the autumn of 2025. Other technology companies that have recently entered into comparable agreements include SpaceX, OpenAI, and Google. Oracle’s shares rose 6.5 per cent to $171.83 on Friday. “These agreements accelerate the transformation toward establishing the United States military as an AI-first fighting force,” the Defence Department said in a statement naming the companies involved; the statement also marks the first official Pentagon confirmation of a new accord with Google reported earlier this week. The push to secure new agreements with technology firms for broad military use of advanced AI comes as the Pentagon accelerates its efforts to develop viable alternatives to Anthropic’s Claude tool. A contentious split between Anthropic and senior defence officials has exposed a persistent divide between the Pentagon and Silicon Valley over the imminent dangers of AI in warfare. “This agreement reflects a shared commitment between the Department of War and Oracle to help ensure that the United States leads decisively in artificial intelligence, as a matter of ongoing global leadership and national security,” Kim Lynch stated. “By bringing advanced AI into classified environments, we are translating innovation into operational advantage where and when it matters most.”
The Pentagon negotiated its agreement with Amazon Web Services late into Thursday, according to two Pentagon officials briefed on the talks. AWS has been dedicated to supporting the US military for more than a decade, said Tim Barrett, an AWS spokesperson, when asked about the new deal. “We look forward to continuing to support the Department of War’s modernization efforts, building AI solutions that help them accomplish their critical missions.” Nvidia did not immediately respond to a request for comment, and a spokesperson for Microsoft declined to comment. A spokesperson for Reflection was not available for comment. During recent renegotiations, the Pentagon disregarded the limits Anthropic had set out to restrict the US military’s use of its AI in classified operations, and sought to remove the company from all defence supply chains. Anthropic had objected to the use of its technology for mass domestic surveillance of US citizens or for fully autonomous weapons systems. In the wake of the fallout with Anthropic, the Pentagon has stepped up its efforts to persuade additional AI companies to accept broader usage terms for their models and infrastructure on classified and top-secret networks. Defence officials are also working to ensure that the US military does not depend on any single company or face any specific constraints, according to a Pentagon official briefed on the discussions.
Nvidia’s new agreement, for instance, grants the Pentagon significantly more leeway than the terms of use in earlier AI deals. The company has committed to refraining from any usage policies or model licences that would limit the Defence Department’s use of its models beyond what is mandated by US law and constitutional authority, according to a person familiar with the agreement. Nvidia has committed to ensuring “full and effective use of their capabilities in support of Department missions,” including the development of autonomous weapons systems, the person said. The department’s use of Nvidia models, weights, or other capabilities will align with the civil liberties and constitutional rights of Americans as mandated by law, the person said, a pledge that lacks any explicitly defined monitoring and evaluation mechanisms. Oracle said its AI strategy “is built around openness, interoperability, and choice across the entire technology stack” and that this approach will enable “the Department of War to build, deploy, and scale any model, without vendor lock-in.” “This approach allows the department to continuously adopt the best AI innovations available while maintaining control over their data, architecture, and long-term technology direction,” the company said. The department has given itself six months to find a replacement for Claude, which is currently used in US military operations against Iran. The dispute has now become entangled in a legal confrontation. On Thursday, Secretary of Defence Pete Hegseth called Anthropic’s leader an “ideological lunatic” while standing firm on his department’s use of AI. “We follow the law and humans make decisions,” Hegseth told Congress. “AI is not making lethal decisions.”
The Pentagon’s push to provide the US military with advanced AI capabilities at a classified level aims to enhance “human-machine teams” capable of managing vast amounts of data, said Cameron Stanley, the department’s chief digital and AI officer, in a statement on the new agreements. Although OpenAI signed an agreement with the Pentagon earlier this year for expanded use of its models on classified networks, its tools have yet to be deployed there; an OpenAI spokesperson said implementation is in progress. Numerous campaign groups have underscored the dangers of depending on erratic AI-assisted systems for life-and-death decisions. Critics argue that AI systems can be prone to error and can foster automation bias, a tendency to trust machine outputs over human reasoning. Stanley did not detail how the Pentagon plans to deploy AI models in classified operations, characterizing them as digital instruments designed to help the Pentagon analyse data, improve comprehension in complex settings, and enable “better decisions, faster.” Claude is one of the AI tools used in operations in Iran on the Maven Smart System, a digital platform that assists with targeting and battlefield operations. US Central Command has said it is employing a range of AI tools to accelerate processes.









