Anthropic’s Claude AI was already a global buzzword, drawing worldwide discussion of its many uses even before the current US-Iran war broke out. Countless people use the highly advanced AI chatbot for tasks ranging from writing an email to solving complex computing problems.
However, reports suggest that the same AI has now been used by the United States in planning and executing drone and missile attacks against Iran.
A report published in The Wall Street Journal suggested that the US CENTCOM used Claude in “intelligence assessments, target identification and simulating battle scenarios” during the strikes on the country.
Trump Banned Anthropic, So How Does the Pentagon Still Use It?
Claude AI, the flagship large language model of US tech firm Anthropic, recently made headlines after US President Donald J. Trump announced that he would block the use of the company’s products by the US government.
“We don’t need it, we don’t want it, and will not do business with them again!” Trump had written in a Truth Social post on Friday.
Then how did the US military use the same AI model in its sophisticated attack against Iran?
The answer lies in contracts that Anthropic had signed with the federal government earlier.
Anthropic’s technology has been used by the US government and military since 2024, making it the first major AI firm to have its products deployed within government agencies handling classified operations.
How Claude AI Was Used Against Iran
After bombing Iran, US officials subtly hinted that Claude was used in “intelligence assessments” and “target identification,” without explaining how exactly it was deployed across the operation.
Was it used to map the locations of targets? Was it used to identify targets from multiple variables? Was it paired with missile technology for autonomous targeting? These questions remain unanswered, but earlier reports about Anthropic offer some hints.
According to reports, Anthropic partnered in November last year with Palantir Technologies, a data analytics company that frequently works with the Pentagon.
The collaboration integrated Anthropic’s large language model, Claude, into a military decision-support system to help with reasoning and analysis.
In January, Anthropic also submitted a proposal worth $100 million to the Pentagon to build voice-controlled autonomous drone swarming technology, Bloomberg News reported.
The idea was to use Claude to convert a commander’s spoken instructions into digital commands that could coordinate multiple drones at the same time.
However, the Pentagon rejected the proposal.
The project aimed to go far beyond simple AI tasks such as summarising intelligence reports.
Instead, the contract involved developing systems for “target-related awareness and sharing” and managing drone swarm operations from launch to mission completion, including potentially lethal missions.