Anthropic's AI Used in Iran Strikes After Trump Moved to Cut Ties: WSJ

By Vismaya V · Published March 2, 2026 · 4 min read · Source: Decrypt

Claude was reportedly embedded in U.S. Central Command even as the White House ordered federal agencies to cut all ties with the company.

Edited by Sebastian Sinclair
Anthropic's Claude. Image: Decrypt/Shutterstock

In brief

Hours after President Donald Trump ordered federal agencies to halt use of Anthropic’s AI tools, the U.S. military carried out a major airstrike on Iran that reportedly relied on the company’s Claude platform.

U.S. Central Command used Claude for intelligence assessments, target identification, and battle-scenario simulations during the Iran strikes, people familiar with the matter confirmed to the Wall Street Journal on Saturday.

The strike came despite Trump’s directive on Friday that agencies begin a six-month phase-out of Anthropic products, which followed a breakdown in negotiations between the company and the Pentagon over how the military can use commercially developed AI systems.

Decrypt has reached out to the Department of Defense and Anthropic for comment.

“When AI tools are already embedded in live intelligence and simulation systems, decisions at the top don’t instantly translate to changes on the ground,” Midhun Krishna M, co-founder and CEO of LLM cost tracker TknOps.io, told Decrypt. “There’s a lag—technical, procedural, and human.”

“By the time a model is embedded across classified intelligence and simulation systems, you’re looking at sunk integration costs, retraining, security re-certifications, and parallel testing, so a six-month phase-out may sound decisive, but the real financial and operational burden runs far deeper,” Krishna added.

“Defense agencies will now prioritize model portability and redundancy,” he said. “No serious military operator wants to discover during a crisis that its AI layer is politically fragile.”

Anthropic CEO Dario Amodei said Thursday the company would not strip safeguards preventing Claude from being deployed for mass domestic surveillance or fully autonomous weapons. 

"We cannot in good conscience accede to their request," Amodei wrote, after the Defense Department demanded contractors allow their systems for "any lawful use."

"The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War," Trump later wrote on Truth Social, ordering agencies to "immediately cease" all use of Anthropic products. 

Defense Secretary Pete Hegseth followed by designating Anthropic a "supply-chain risk to national security," a label previously reserved for foreign adversaries that bars every Pentagon contractor and partner from commercial activity with the company.

Anthropic called the designation "unprecedented" and vowed to challenge it in court, saying it had "never before publicly applied to an American company." 

The company added that, to its knowledge, the two disputed restrictions had not affected a single government mission to date.

“The debate isn’t about whether AI will be used in defense, that’s already happening,” Krishna added. “It is whether frontier labs can maintain differentiated guardrails once their systems become operational assets under ‘any lawful use’ contracts.”

OpenAI moved quickly to fill the gap, with CEO Sam Altman announcing a Pentagon deal on Friday night covering classified military networks and claiming it included the same guardrails Anthropic had sought.

Yesterday we reached an agreement with the Department of War for deploying advanced AI systems in classified environments, which we requested they make available to all AI companies.

We think our deployment has more guardrails than any previous agreement for classified AI…

— OpenAI (@OpenAI) February 28, 2026

Asked whether the Pentagon’s effective blacklisting of Anthropic set a troubling precedent for future disputes with AI firms, Altman responded on X: “Yes; I think it is an extremely scary precedent, and I wish they handled it a different way.

“I don't think Anthropic handled it well either, but as the more powerful party, I hold the government more responsible. I am still hopeful for a much better resolution,” he added.

Meanwhile, nearly 500 employees from OpenAI and Google signed an open letter warning that the Pentagon was attempting to pit AI companies against each other. 

