
Pentagon Threatens to Blacklist Anthropic Over Military Use of Claude AI


Updated: 2/26/2026 | Our Bureau

The United States Department of Defense is locked in a high-stakes confrontation with artificial intelligence firm Anthropic over how its Claude AI system can be used in military operations, raising broader questions about control, ethics and national security in the AI era.

Defense Secretary Pete Hegseth has warned the company that it could be removed from the Pentagon’s supply chain if it refuses to ease restrictions on military applications of its flagship AI model. According to Reuters, Anthropic has been given until Friday at 5 p.m. to respond to the department’s demands.

The dispute, which has been brewing for months, could jeopardize contracts worth up to $200 million.

What Triggered the Standoff?

At the heart of the conflict is a fundamental disagreement over who ultimately governs how Claude AI is deployed: the Pentagon or Anthropic.

According to CBS News, tensions escalated after the U.S. military reportedly used Claude during an operation aimed at capturing former Venezuelan President Nicolás Maduro last month.

Anthropic said it “has not discussed the use of Claude for specific operations” with the Defense Department. People familiar with the matter told CBS that the company has repeatedly insisted on maintaining strict guardrails, including prohibiting the use of Claude for mass surveillance of U.S. citizens and for fully autonomous targeting decisions without human oversight.

Pentagon Push for Fewer Restrictions

The Defense Department has reportedly urged major AI firms, including Anthropic and OpenAI, to make their systems available on classified government networks with fewer user restrictions than those applied in civilian contexts.

During a recent meeting with Anthropic CEO Dario Amodei, Hegseth outlined potential consequences if the company refuses to comply. According to Reuters, options discussed include designating Anthropic as a supply-chain risk or invoking legal authorities that could compel the company to alter its policies.

Anthropic, in a statement following the meeting, said it continued “good-faith conversations” to ensure it can support national security missions “in line with what our models can reliably and responsibly do.”

Ethics, AI and National Security

The standoff underscores a growing tension between Silicon Valley’s AI safety commitments and Washington’s security imperatives.

Anthropic has positioned itself as a safety-focused AI company, embedding usage limits into Claude to prevent misuse. One core restriction reportedly bars the model from being used for final military targeting decisions without human involvement.

The debate intensified earlier this month when senior safety researcher Mrinank Sharma announced his departure from Anthropic, citing concerns about global instability and interconnected crises.

The Pentagon’s ultimatum marks a significant escalation in what had been private negotiations. If carried out, blacklisting Anthropic could send a strong signal to other AI firms contracting with the federal government.

Broader Implications

The outcome of the dispute may set a precedent for how AI companies balance commercial government contracts with ethical constraints on advanced systems.

For the Pentagon, access to cutting-edge AI tools is increasingly central to modern warfare, intelligence analysis and logistics. For AI developers, however, loosening safeguards could undermine public trust and internal safety principles.

As artificial intelligence becomes more deeply embedded in defense infrastructure, the confrontation between the Pentagon and Anthropic highlights a defining question of the AI age: how far should automation go in matters of war, and who gets to decide?