ICTpost News Network
The wars of the 21st century are no longer fought solely with tanks, missiles and fighter jets. Increasingly, they are shaped by algorithms, data streams and artificial intelligence systems capable of processing information faster than any human command structure. The recent joint US-Israel strikes on Iran have brought this transformation into sharp focus. At the centre of the controversy is Claude, the artificial intelligence system developed by Anthropic, which reportedly played a role in intelligence analysis and targeting during the operation.
What makes the episode remarkable is not simply the use of AI in military planning; artificial intelligence has been edging onto the battlefield for nearly a decade. The extraordinary part is the political drama surrounding it: the US military reportedly continued using Claude to assist operational planning even as Donald Trump publicly ordered federal agencies to sever ties with Anthropic and stop using its AI tools.
The episode reveals a deeper reality: artificial intelligence is now so deeply embedded in modern military infrastructure that disentangling it during active operations may no longer be possible.
The Invisible Analyst Behind the Battlefield
According to multiple reports, the US military used Claude as part of its intelligence and planning systems during the massive aerial campaign against Iranian targets. The AI system reportedly helped analysts process large volumes of intelligence data, identify potential targets and simulate possible battlefield outcomes before strikes were executed.
This role reflects the growing integration of AI into modern defence networks, particularly those developed under initiatives such as Project Maven, which was originally designed to help US analysts process enormous quantities of drone footage and satellite imagery during counter-terrorism operations. Over time, however, it evolved into a broader platform for AI-assisted intelligence analysis.
In practical terms, the challenge facing military planners today is not a shortage of data but an overwhelming abundance of it. Satellites, reconnaissance aircraft, cyber-surveillance systems and electronic intelligence networks generate vast streams of information every minute. Human analysts cannot realistically sift through such volumes in real time.
AI systems such as Claude change that equation. By rapidly analysing patterns across massive datasets, they can flag potential threats, identify anomalies and prioritise targets for further review by human decision-makers. In modern warfare, this capability can significantly accelerate operational planning.
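To see the pattern in miniature, consider the toy sketch below. It is purely illustrative (not Claude's interface, nor any real defence system) and simply shows how a triage layer might score incoming intelligence items and surface the most urgent ones for human review. Every field, weight and data point is invented for the example.

```python
from dataclasses import dataclass

# Illustrative only: a toy triage layer that ranks intelligence items
# for human review. All fields, weights and data are invented; this
# depicts no real system or API.

@dataclass
class IntelItem:
    source: str             # e.g. satellite, SIGINT, drone footage
    anomaly_score: float    # 0-1: deviation from the expected baseline
    corroborations: int     # independent sources reporting the same signal
    age_hours: float        # how old the intelligence is

def priority(item: IntelItem) -> float:
    """Combine signals into one review-priority score (higher = review sooner)."""
    corroboration_bonus = 0.1 * min(item.corroborations, 3)  # cap the bonus
    staleness_penalty = 0.02 * item.age_hours                # old intel decays
    return item.anomaly_score + corroboration_bonus - staleness_penalty

items = [
    IntelItem("satellite", anomaly_score=0.9, corroborations=2, age_hours=1.0),
    IntelItem("sigint",    anomaly_score=0.4, corroborations=0, age_hours=12.0),
    IntelItem("drone",     anomaly_score=0.7, corroborations=3, age_hours=3.0),
]

# The machine ranks; a human analyst still decides what to do with each item.
for item in sorted(items, key=priority, reverse=True):
    print(f"{item.source:10s} priority={priority(item):.2f}")
```

The point of the sketch is the division of labour: the software compresses the search, while the judgment call stays with a person, at least in principle.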

In effect, the AI became an invisible analyst embedded within the military command structure.
The Compression of War
Military strategists often describe the chain of events leading to a strike as the “kill chain”: identifying a target, tracking it, deciding to engage and then executing the attack. Traditionally, this sequence could take hours or even days, especially in complex operations involving multiple intelligence sources.
Artificial intelligence is compressing this process dramatically.
By rapidly analysing intelligence inputs and generating recommendations, AI systems can reduce decision-making time from hours to minutes. The result is what defence analysts increasingly call “decision compression” — a battlefield where the pace of war approaches the speed of computation.
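A back-of-the-envelope model makes the arithmetic concrete. The stage timings below are invented purely for illustration and are drawn from no real operation:

```python
# Toy model of "decision compression": invented stage timings, in minutes,
# for a traditional kill chain versus one with AI-assisted analysis.
# The figures are illustrative only.

KILL_CHAIN = ["identify", "track", "decide", "execute"]

traditional = {"identify": 240, "track": 120, "decide": 180, "execute": 30}
ai_assisted = {"identify": 10, "track": 5, "decide": 20, "execute": 30}

for label, timings in [("traditional", traditional), ("AI-assisted", ai_assisted)]:
    total = sum(timings[stage] for stage in KILL_CHAIN)
    breakdown = ", ".join(f"{stage}={timings[stage]}m" for stage in KILL_CHAIN)
    print(f"{label:12s} total {total:4d} min  ({breakdown})")

# Only the analysis and decision stages shrink; the physical "execute"
# stage is unchanged. That asymmetry is what compresses the timeline.
```

On these made-up numbers, the cycle falls from roughly nine and a half hours to just over one, and nearly all of the saving comes before anything is fired.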

The outcome is a new model of warfare in which software helps set the tempo of conflict.
Silicon Valley Meets the Pentagon
The involvement of Claude also highlights the increasingly complex relationship between Silicon Valley and the US defence establishment.
Anthropic, the company behind Claude, has positioned itself as a leader in responsible AI development. Its policies explicitly restrict the use of its systems for violent purposes, including the development of weapons or large-scale surveillance systems.
This stance brought the company into direct conflict with the US government earlier this year when the military reportedly used Claude in an operation targeting Nicolás Maduro, the president of Venezuela. Anthropic objected, arguing that such use violated the company’s terms of service.
The dispute escalated rapidly. In a public post, US defence secretary Pete Hegseth accused the company of undermining national security by attempting to impose ethical limits on military technology.
“America’s warfighters will never be held hostage by the ideological whims of Big Tech,” Hegseth wrote, demanding full access to AI tools for any lawful military purpose.
The clash reflects a deeper structural tension. Private technology companies now build some of the most powerful digital systems in the world, but governments increasingly want to integrate those systems into national security operations.
The result is a struggle between corporate ethics and state power.
Trump’s Ban — and the Military Reality
The political drama reached its peak just hours before the Iran strikes began.
President Donald Trump ordered federal agencies to stop using Anthropic’s AI tools immediately. In a post on his social media platform, he described Anthropic as a “Radical Left AI company run by people who have no idea what the real world is all about.”
Yet despite the order, the Pentagon reportedly continued using Claude during the operation.
The reason illustrates how deeply AI systems are now embedded in military infrastructure. Removing them instantly would risk disrupting intelligence workflows and operational planning systems already built around them.
Acknowledging this reality, the Pentagon announced that Anthropic would continue providing services temporarily while the military transitioned to alternative AI platforms.
Among those alternatives is OpenAI, whose chief executive Sam Altman confirmed that the company had reached an agreement with the Pentagon to provide AI tools within classified military networks, including the use of ChatGPT.

The Rise of Algorithmic Warfare
The use of AI in the Iran operation signals a broader shift toward what analysts increasingly call “algorithmic warfare”.
In such conflicts, artificial intelligence systems assist in nearly every stage of military operations: analysing intelligence, predicting adversary movements, planning logistics and simulating combat scenarios.
Some analysts believe this transformation could fundamentally alter the nature of war.
Strategist and author Brahma Chellaney argues that technological acceleration is already reshaping global security dynamics. According to Chellaney, AI-driven military capabilities could allow powerful states to conduct complex operations with unprecedented speed and precision.
“The integration of artificial intelligence into military planning compresses the timeline of conflict,” he notes. “It allows decisions that once took hours or days to be made in minutes.”
Other analysts, however, warn that such speed carries risks. When decision cycles become extremely rapid, the margin for human oversight narrows dramatically.
Mistakes can escalate faster than diplomacy can respond.
The Ethical Fault Line
Beyond strategic implications, the use of AI in warfare raises profound ethical questions.
Critics argue that AI-assisted warfare risks distancing human decision-makers from the consequences of their actions. When algorithms filter intelligence and recommend targets, responsibility for errors can become diffused across complex technological systems.
Others fear that the speed enabled by AI could make wars easier to initiate. If military planners can rapidly simulate outcomes and execute strikes with minimal risk to their own forces, the political barriers to conflict may weaken.
Technology companies are increasingly aware of these risks. Many have attempted to impose ethical guidelines on how their AI systems can be used.
But as the Claude controversy shows, enforcing those limits against powerful governments may prove extremely difficult.
Once AI tools become embedded in military infrastructure, their use may expand far beyond the intentions of the companies that created them.
The Next Battlefield
The Iran strikes may ultimately be remembered less for their immediate geopolitical impact than for what they revealed about the future of warfare.
Artificial intelligence is no longer just a support tool on the battlefield. It is becoming a core component of military decision-making systems.
In future conflicts, AI systems may not only analyse intelligence but also predict enemy strategies, recommend operational plans and coordinate autonomous weapons platforms.
In such a world, wars could unfold at digital speed — faster than traditional command structures were ever designed to handle.
The clash between Anthropic and the Pentagon therefore represents more than a dispute between a technology company and a government agency. It reflects the emergence of a new strategic reality.
The weapons of the future will not only be missiles and drones.
They will also be algorithms.
