Inside the US Ban on Claude: Its Role in Airstrikes Revealed
On March 1, 2026, The Atlantic reported key details about Anthropic’s negotiations with the US military. Sources revealed that just last Friday morning, the Anthropic team received news that the Pentagon was prepared to make concessions. However, by that afternoon, they discovered that the Pentagon still intended to use AI to analyze vast amounts of data from American citizens, including chat logs, search histories, GPS tracks, and even credit card transactions. Anthropic’s management immediately halted the negotiations, resulting in the deal falling through.
Another insider disclosed details about the US’s “AI autonomous weapons”: machines that can lock onto and attack targets without human intervention. Anthropic did not oppose the existence of such weapons but expressed concerns about the reliability of its models, fearing they could cause collateral damage; it also could not accept a cloud deployment solution. The Pentagon planned to invest up to $13.4 billion in these systems in the 2026 fiscal year, covering everything from individual drones to drone swarms capable of coordinated operations in the air and at sea.
This dispute came to light following President Trump’s announcement on social media that he would ban Anthropic, causing a stir in Silicon Valley. Hundreds of Google and OpenAI employees signed an open letter supporting Anthropic’s decision to uphold its principles. Even OpenAI’s CEO, Sam Altman, spoke out, stating, “This is no longer just an Anthropic issue; it’s an industry-wide problem.” However, OpenAI later signed a large contract with the US military.
Interestingly, according to the Wall Street Journal, just hours after the US announced it would stop using Anthropic’s AI tools, the Trump administration used those same tools to launch a large-scale airstrike against Iran. On March 1, the Iranian government confirmed that Supreme Leader Khamenei had been attacked, raising questions about whether AI was involved. Insiders confirmed that command centers worldwide, including US Central Command, were using Claude for intelligence assessment, target identification, and combat simulation.
The US Military’s Use of Claude Amid the Ban
On March 1, US media reported that the airstrike against Iran utilized Anthropic’s Claude system. Details included:
- The US Central Command used Claude for intelligence gathering in the Middle East;
- Claude was employed for intelligence assessments, target identification, and combat simulation scenarios;
- The US government stated that phasing out Claude would take six months;
- Similar intelligence use reportedly occurred during the operation that led to President Maduro’s arrest.
Just hours before this, President Trump had announced the ban on the system. Until Pete Hegseth moved to terminate the US government’s partnership with Anthropic, the company’s leadership believed the deal was still moving forward.
The Pentagon had insisted on renegotiating its contract with Anthropic, whose model was the only one permitted to access US federal government classified systems. The Pentagon’s goal in the negotiations was to remove the ethical restrictions imposed on the model.
Sources revealed that on Friday morning, Anthropic was informed that Hegseth’s team was ready to make significant concessions. Previously, the Pentagon had sought to leave room for maneuvering in the agreement with Anthropic. While they promised not to use Anthropic’s AI for large-scale domestic surveillance or fully autonomous killing machines, they later added qualifiers like “depending on the circumstances,” suggesting these terms could be adjusted based on official interpretations of specific situations.
Upon learning that the US government was willing to remove these phrases, the Anthropic team felt relieved. However, another challenge arose: by Friday afternoon, they discovered that the Pentagon still wanted to use their AI technology to analyze vast amounts of data collected from American citizens.
This data could include questions users asked common chatbots, Google search histories, GPS location tracks, and even credit card transaction details, all of which would be cross-referenced with other aspects of users’ lives.
Anthropic’s management informed Hegseth’s team that this crossed a line, leading to the deal’s collapse. Shortly after, Hegseth ordered US military contractors, suppliers, and partners to cease business with Anthropic. The list of companies collaborating with the US military is extensive, including Amazon, which provides most of Anthropic’s computing infrastructure.
The US Department of Defense did not respond to requests for comments. A spokesperson for Anthropic referred reporters to the company’s statement regarding Hegseth’s remarks.
Anthropic’s Concerns About Autonomous Weapons
The Atlantic cited insiders who said there were disagreements between Anthropic and the Pentagon over autonomous weapons: machines that can select and attack targets without a human making the final decision. The US military has been developing such systems for years, with $13.4 billion allocated for them in the 2026 fiscal year. These systems cover a wide range, from individual drones to swarms capable of coordinated operations in the air and at sea.
Anthropic does not oppose these types of weapons. In fact, the company has proactively suggested direct collaboration with the Pentagon to improve their reliability. Just as autonomous vehicles can be safer than human drivers in certain situations, killer drones may one day be more precise than human operators, reducing the likelihood of collateral damage.
However, Anthropic’s leadership believes their AI has not yet reached that level. They worry that these models could lead to machines firing indiscriminately or inaccurately, endangering civilians and even US military personnel.
During negotiations, a solution was proposed: if the Pentagon committed to keeping AI technology in the cloud rather than applying it directly to weapons, it might resolve the deadlock. In other words, these models could be placed outside the so-called edge systems—whether those are drones or other autonomous weapons. They could synthesize intelligence before action but would not participate in any lethal decision-making. Thus, AI would not be held accountable for fatal errors caused by drones.
However, Anthropic was not satisfied with this proposal. The company believes that in modern military AI architecture, the boundary between cloud and edge has become increasingly blurred. It is more of a gradient than a barrier. Drones on the battlefield can now be collaboratively controlled through a network that includes cloud data centers. Although drones are designed to operate independently, the US military will always strive to keep them connected to the most powerful models in the cloud—better connectivity means smarter machines.
Anthropic’s Discontent with Cloud Deployment Solutions
In fact, the Pentagon has been working hard to leverage cloud computing more effectively. One of the goals of its Joint Warfighting Cloud Capability program is to push computing resources closer to the battlefield.
The AI might reside on Amazon servers in Virginia rather than in an overseas war zone, but if it is making battlefield decisions, there is little ethical difference.
An insider close to the negotiations revealed that Anthropic ultimately abandoned the idea of using cloud deployment to solve the problem and did not analyze that option further.
Anthropic’s leadership may have hoped that other AI companies would also uphold similar positions. Earlier this week, they had reason to believe OpenAI would do so. CEO Sam Altman had stated that, like Anthropic, OpenAI would refuse to use its models for autonomous weapon systems.
However, at the same time Altman made these statements, he was negotiating a new agreement with the Pentagon. This agreement was announced just hours after Anthropic’s negotiations collapsed. Altman posted three identical tweets, announcing that OpenAI had reached an agreement with the Pentagon to deploy its models on classified networks.
Faced with accusations of “backstabbing Anthropic” and “rapidly capitulating,” OpenAI released a statement the next day claiming the contract included “three red lines”: no large-scale domestic surveillance, no use in autonomous weapon systems, and no high-risk automated decision-making. The statement emphasized that the agreement was “more robust” than Anthropic’s previous proposal and that the company’s AI would only be deployed in the cloud.
However, the public was not convinced. Some ran the contract terms through AI analysis and found that phrases like “all lawful purposes” were vaguely defined, raising concerns that the “red lines” could quickly erode. OpenAI employees may well be asking what has changed since Altman’s initial statements in support of Anthropic.
As of the afternoon of March 1, nearly a hundred OpenAI employees had signed an open letter expressing their alignment with Anthropic’s stance on domestic surveillance and autonomous weapons. When Altman meets with employees in the office on Monday, he may need to explain why a proposal that Anthropic firmly rejected was so appealing to him.
Conclusion
The disagreements between Anthropic and the Pentagon touch on a deeper question: who is responsible when AI is deployed on the battlefield? The Pentagon banning Anthropic while continuing to use its models for airstrikes illustrates a dilemma: once technology is delivered to the US government, the company that built it has little control over how it is used, a concern that is particularly pronounced as the US militarizes AI.
For Silicon Valley, this incident serves as a mirror. Some uphold their principles and prefer to forgo contracts, while others flip from support to signing in just 12 hours. Whether AI companies can truly maintain their “red lines” is a matter of concern for the industry and society at large.