Why OpenAI’s Secret Pentagon Deal on the Night Anthropic Was Banned Sparks Backlash

On the night President Trump labeled Anthropic a national‑security risk, OpenAI announced a covert agreement with the U.S. Department of War that mirrors Anthropic’s safety red lines but adds conditional language, prompting resignations, criticism, and user protests.

On the same evening that President Trump signed an executive order designating Anthropic a "national security supply‑chain risk," OpenAI CEO Sam Altman posted a tweet announcing that the company had reached an agreement with the U.S. Department of War to deploy its models on the department’s classified network.

Earlier in February, the U.S. military’s use of Anthropic’s Claude in an operation against Maduro had sparked a public dispute over AI principles. Anthropic CEO Dario Amodei articulated two non‑negotiable red lines: (1) the model must not be used for large‑scale surveillance of a nation’s own citizens, and (2) it must not be employed in fully autonomous weapons, a restriction he justified by pointing to the model’s risk of hallucination.

The Pentagon, however, demanded that AI partners accept "all lawful uses" of their models without unilateral restrictions.

"We will not let any company dictate how the Pentagon makes combat decisions," the War Secretary said in a face‑to‑face meeting.

Anthropic refused to compromise, and after the February 27 deadline passed, the company was effectively banned.

In contrast, OpenAI’s agreement, as described by Altman, restates the same two safety principles and notes that the Department of War has incorporated them into the contract. OpenAI also pledged technical safeguards, on‑site engineering support, and cloud‑only deployment, while urging the Pentagon to extend the same terms to all AI firms.

"Our two most important safety principles are prohibiting mass domestic surveillance and keeping human responsibility over the use of force (including autonomous weapon systems). The Department of War agrees with these principles and has codified them in the agreement," Altman wrote.

The contract’s restriction on autonomous weapons is conditional: it applies only where law, regulation, or DoD policy requires human control. Because current DoD policy does not mandate human approval before the use of lethal force, the clause hinges on a policy that does not yet exist, making it weaker than Anthropic’s absolute ban.

Reactions were swift. A person claiming to be an OpenAI employee announced their resignation on Twitter while sharing Altman’s post. Externally, libertarian groups circulated "DELETE CHATGPT" posters calling for a boycott, and memes mocked the statement as naïve. Some users cancelled their ChatGPT subscriptions and switched to Claude, though observers argued the switch reflected emotional protest more than any substantive product difference.

This episode illustrates how a single wording difference in AI safety commitments can generate divergent interpretations and stir both internal and public backlash.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: OpenAI, AI safety, AI policy, Anthropic, surveillance, autonomous weapons, Pentagon
Written by

AI Insight Log

Focused on sharing: AI programming | Agents | Tools
