The U.S. military is using Anthropic’s Claude AI in the conflict with Iran despite a presidential order banning the technology and a public dispute with the company over ethical guardrails against mass surveillance and autonomous warfare.
The U.S. military reportedly used Anthropic’s Claude AI model during weekend attacks on Iran and continues to do so despite a government-wide ban on the technology, triggered by a dispute over humanitarian guardrails. The Pentagon maintains the tool is used for document synthesis and logistics, but Anthropic CEO Dario Amodei confirmed the company sought to draw “red lines” against mass surveillance and autonomous weaponry, saying, “Disagreeing with the government is the most American thing in the world.”
Following President Trump’s order to phase out the technology within six months, Defense Secretary Pete Hegseth declared the firm a supply chain risk. The Pentagon’s chief technology officer, Emil Michael, nonetheless defended the current usage, asserting that “at some level, you have to trust your military to do the right thing.” As the conflict persists, it remains unclear whether Israeli forces are also employing the model. The Pentagon continues to push for the ability to use the AI for “all lawful purposes” while seeking a long-term replacement for Claude’s capabilities.