OpenAI Signs the Military Contract Anthropic Rejected—Same Day, Same Terms?
On Feb 28, 2026, OpenAI signed a deal with the U.S. Department of War to deploy AI on classified military networks. Anthropic had publicly refused the same contract and threatened legal action, and the two companies' sharply conflicting descriptions of the agreement have sparked a heated debate over AI militarization.
Background
The story begins with the U.S. Department of War (formerly the Department of Defense) negotiating with major AI firms to deploy models on classified military networks, demanding that the models be usable for "all lawful purposes," including autonomous weapons and domestic surveillance.
Anthropic's refusal
On Feb 26, Anthropic CEO Dario Amodei announced two non‑negotiable red lines: AI must not autonomously decide to use force, and it must not be used for large‑scale domestic surveillance. When the contract's "all lawful purposes" language left both of those uses on the table, Anthropic rejected it, and the department labeled the company a "supply‑chain risk," effectively threatening to shut it out of government work.
OpenAI's agreement
Later that night, Sam Altman posted a cautious tweet saying OpenAI also adheres to the two principles and had written them into the agreement, implying that a different wording had satisfied the department. The next day OpenAI issued a more detailed statement, reiterating the same principles and disputing the risk label attached to Anthropic.
Anthropic's version
Anthropic's public statement points to a contract clause granting the department the right to use AI for any lawful purpose, which it views as a blank check: what counts as "lawful" can change faster than the technology itself.
The two red lines
1. Prohibit large‑scale domestic surveillance. The term "large‑scale" is vague, and the boundaries of "domestic," "border," and "intelligence systems" are undefined, making enforcement difficult.
2. Humans retain final responsibility for the use of force (no fully autonomous weapons). This is the most contentious issue in AI militarization; the department’s stance determines whether the AI will be an advisory tool or an autonomous system.
Academic commentary
Commentators such as Ethan Mollick and cognitive scientist Gary Marcus stressed the broader risk: AI's destructive potential is growing while transparency remains limited and government‑industry dealings follow a volatile, unpredictable pattern.
Implications
The dispute raises three core questions for AI companies:
1. Can AI firms realistically refuse government contracts? Anthropic shows refusal is possible, but it carries the severe penalty of being branded a supply‑chain risk.
2. Is "conditional cooperation" a workable compromise, or does it put principled firms at a competitive disadvantage? OpenAI argues that participating lets it embed constraints, but the approach may end up rewarding less principled partners.
3. Who will enforce the contractual constraints once the systems run on classified networks? Visibility into actual usage is limited, making compliance verification difficult.
The episode marks the first public split in the AI industry over militarization, with community reactions ranging from #QuitGPT calls to suggestions to switch to alternative models.
What happens next depends on whether the Department of War extends the same terms to other companies, something OpenAI explicitly called for in its statement.