US Military Used Claude in Iran Strikes Despite Trump's Break With Anthropic
Shortly after United States President Donald Trump publicly criticized the artificial intelligence firm Anthropic, reports emerged that the US military had used its AI model, Claude, in operations against Iran. According to the reports, the military deployed the system to analyze intelligence, identify potential targets, and run combat-scenario simulations. The US military, in conjunction with the Israel Defence Forces, carried out a coordinated strike on Iran in the early hours of Saturday. Hours later, Trump and Israel’s Prime Minister Benjamin Netanyahu asserted that the strikes had killed Iran’s Supreme Leader Ayatollah Ali Khamenei, a claim Iranian authorities confirmed on Sunday.
The strike came shortly after Trump labeled Anthropic a “radical left and woke company” and ordered all federal agencies to stop using its AI tools. “I am directing every federal agency in the United States government to immediately cease all use of Anthropic’s technology. We do not require it, we do not desire it, and we will not engage in business with them again!” Trump said in a post on Truth Social on Friday. He added that a six-month phase-out period would apply to agencies, such as the Department of War, that use Anthropic’s products. “Anthropic better get their act together, and be helpful during this phase out period, or I will use the full power of the presidency to make them comply, with major civil and criminal consequences to follow,” Trump wrote.
Furthermore, US Defence Secretary Pete Hegseth accused Anthropic of “arrogance and betrayal”, asserting that “America’s warfighters will never be held hostage by the ideological whims of Big Tech.” He said, “Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition.” Claude had also been deployed by the US military in its January raid to capture Venezuelan President Nicolás Maduro. The tool’s use drew criticism because Anthropic objects to its deployment for violent purposes, including weapons development and mass surveillance. In an interview, Anthropic chief executive officer Dario Amodei said the company is willing to work with the US Department of Defence as long as any engagement stays within its established boundaries.
“Tonight, we reached an agreement with the Department of War to deploy our models in their classified network,” Amodei said on Sunday. “Throughout our interactions, the DoW demonstrated a profound commitment to safety and a willingness to collaborate in order to attain the most favorable outcome. AI safety and wide distribution of benefits are the core of our mission. Two of our most critical safety principles are the bans on domestic mass surveillance and the accountability of individuals for the use of force, which encompasses autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement,” he added.









