Pentagon’s Assault on American Ingenuity Under ‘National Security’
The Department of War is living up to its new name. Unfortunately, its target is a vital American company. Defense Secretary Pete Hegseth gave Anthropic Chief Executive Officer Dario Amodei until 5:01 p.m. Friday to drop two restrictions on the military’s use of the company’s AI. The restrictions are simple: no mass surveillance of American citizens, and no fully autonomous weapons without human oversight. Anthropic agreed to everything else, including missile defense and cyber operations. It is the first and only frontier AI lab deployed on classified systems. Its technology helped capture Nicolás Maduro. This is not a company of pacifists. It drew two lines. Yet before the deadline even expired, President Donald Trump barred every government department from using Anthropic’s AI. After it passed, the Pentagon designated Anthropic a “supply chain risk,” a label typically reserved for foreign entities such as Huawei, one that bars all defense contractors from doing business with the company. The Pentagon is supposed to fight America’s adversaries, not its greatest assets and core values.
In 2018, thousands of Google engineers signed a letter declaring that their company “should not be in the business of war.” Google caved and pulled out of Project Maven, its AI contract with the Pentagon. That was shameful. US servicemembers deserve the best that US technologists can build, and the Pentagon is right that, within broad limits, it should decide how to use those tools. Imagine a Delta Force operator consulting the terms of service before pulling the trigger. Nobody wants that. But that is not what is happening here. These are bright lines around two applications that most Americans reject, that current AI is too unreliable to perform, and that a Pentagon spokesperson says the military has “no interest” in pursuing. Which suggests the fight is about something other than military capability, or that the Pentagon is not being honest about its intentions. These are limits everyone should support. I use Claude, Anthropic’s AI.
While researching a recent column, I found that every single link it generated was fabricated. The phenomenon is called hallucination, and it is not a problem better engineering can solve. A 2025 paper by researchers at OpenAI and Georgia Tech offered a mathematical proof that hallucinations cannot be fully eliminated with current AI architectures. When it happens in my research, I lose an afternoon. When it happens in a weapons system, someone dies. And hallucination may be the least of the dangers of weaponized AI. This week, Kenneth Payne of King’s College London published a study comparing three leading AI models in simulated geopolitical crises. The models used nuclear weapons in 95 percent of scenarios. None ever chose to surrender or withdraw, even when facing defeat. When Anthropic says AI is not reliable enough for autonomous weapons, it is being generous. Domestic surveillance is an even clearer line. Amodei himself has warned that a sufficiently powerful AI could “gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow.” No administration of either party should be trusted with that power aimed at the American people.
But the fight is about something more fundamental still. Voltaire observed that the British liked to shoot an admiral from time to time, “to encourage the rest.” The administration is doing that to Anthropic. It wants every American company to be afraid. David Sacks, the White House AI czar, has attacked Anthropic’s restrictions as “woke AI,” slotting the dispute into the culture war. And the consequences for Anthropic would be severe. The company has raised $30 billion at a $380 billion valuation. A supply chain designation would force Boeing and Lockheed Martin to sever ties. Investors do not fund companies the government is trying to destroy. Silicon Valley’s leaders gave millions to this administration. They stood behind the president at his inauguration. They are paying for his new ballroom. They have stayed largely silent as the administration extracted equity from Intel, slapped export levies on Nvidia and AMD, and demanded obedience from nearly everyone else. If the government can do this to a $380 billion company for refusing to help spy on Americans, no company is safe. The CEOs who empowered this administration need to see that it is now turning on the industry. They can speak up now, or they can wait their turn.