Why OpenAI Has Lost Its Mission: A Deep Dive into Recent Decisions
The article analyzes OpenAI's recent strategic shifts, contrasting its weakening product focus and eroding safety commitments with Anthropic's focused growth, drawing on revenue data, internal memos, and industry reports to argue that the company is now driven by deal-making rather than its original AI mission.
Battlefield
Anthropic reported $30 billion in annualized revenue in 2024, roughly ten-fold year-over-year growth, with $11 billion added in the last month alone [1]. Its enterprise customer count rose from 500 to 1,000 in two months [2], with each customer paying $211 per active user, whereas OpenAI's weekly active users generate only $25 each, roughly one-eighth of Anthropic's monetization efficiency.
Claude Code, Anthropic's coding agent launched in May 2025, reached $2.5 billion in annualized revenue within nine months and accounts for 4% of public GitHub commits, projected to reach 20% by year-end [4]. Developers prefer Claude Code despite Gemini 3's higher benchmark scores, suggesting that developer experience, not raw model performance, is what creates a durable moat [5].
Anthropic's valuation stands at $380 billion, less than half of OpenAI's $852 billion, despite Anthropic's faster growth, implying a market correction may follow [7].
Transaction
OpenAI's recent actions (the Code Red memo, sweeping product cuts, the abrupt shutdown of Sora and termination of a $1 billion Disney contract, the Pentagon deal, $122 billion in financing, and a lengthy New Yorker profile) appear unrelated, but collectively they point to decision-making driven by transactions rather than by product or mission.
Code Red was triggered by Gemini 3 surpassing ChatGPT on benchmarks and by Claude Code's surging developer reputation, prompting a pause on ads, shopping, health agents, and other projects. The memo asks who approved these side quests and how many resources they consumed, underscoring how they diluted ChatGPT's core competitiveness.
On March 17, an internal email from Applications CEO Fidji Simo announced the cancellation of projects including Sora, the Atlas browser, the Jony Ive hardware collaboration, and adult content, retaining only coding and enterprise services. The author likens this to a student who spends years dabbling in many arts, only to drop them all and focus on piano once a classmate excels at it.
On March 24, OpenAI simultaneously shut down Sora, terminated the Disney deal, raised $10 billion, and retitled Simo "AGI Deployment CEO." Sora's downloads fell from 4.8 million to 1.1 million, and one commentator noted that Sora never won in any niche.
The pattern repeats: make a splashy announcement to shape the narrative and secure financing, abandon the product quickly when the data disappoints, then immediately launch the next big story. The author describes this as a trader's playbook.
Soul
A New Yorker long-form profile of Sam Altman, based on internal memos and more than 100 interviews, identifies a recurring pattern of promise, breach, and denial throughout his tenure, suggesting a systemic behavioral issue rather than isolated missteps.
Former chief scientist Ilya Sutskever sent a 70-page confidential memo accusing Altman of lying, while private notes from Anthropic founder Dario Amodei recount Altman's secret contract terms with Microsoft and his unfounded accusations against board members, reinforcing the view that "the problem is Sam himself."
Board members are quoted describing Altman as simultaneously craving approval and showing a near-antisocial indifference to the consequences of deception, likening him to figures such as Madoff or SBF.
The author argues that when a CEO’s core skill is deal‑making, the entire organization aligns with that ability, leading to over‑promising on compute for safety teams, under‑delivering, and repeatedly reshaping the company’s direction.
Dream
OpenAI’s original mission was “to ensure artificial general intelligence benefits all of humanity,” a broad but clear safety‑first goal.
Recent actions, including minimal compute allocation for safety teams, dismissive responses to questions about existential risk, and a preoccupation with financing, nuclear fusion ventures, and super-app ambitions, suggest the mission has become vague and unimplemented.
The author concludes that without a concrete mission, decisions become opportunistic, leading to resource depletion and a lack of clear purpose, as evidenced by OpenAI’s fragmented product strategy and the rise of Anthropic’s focused enterprise offerings.
Old Zhang's AI Learning
An AI practitioner specializing in large-model evaluation and on-premise deployment, agents, AI programming, Vibe Coding, general AI, and broader tech trends, publishing daily original technical articles.
