What Anthropic’s New Economic Index Reveals About Claude’s Growing User Base
Anthropic’s March 2026 Economic Index analyzes over two million Claude.ai and API conversations. It finds usage spreading from high‑skill professional tasks into everyday activities, model choice varying with task value, and longer‑tenured users achieving higher success rates, pointing to emerging AI adoption trends and widening skill gaps.
Overview of the New Economic Index
The March 2026 report from Anthropic examines 1 million Claude.ai consumer‑side dialogues and 1 million first‑party API developer conversations, using a privacy‑preserving data system to track real‑world usage across economic domains. Compared with the previous report, it adds deep analysis of model‑selection behavior and learning curves.
Usage Scenarios Are Rapidly Diversifying
Programming‑related tasks remain the most common on Claude.ai, accounting for 35% of all dialogues, most of them mapped to computer‑science and mathematics occupations. Between November 2025 and February 2026, the share of the top 10 task types fell from 24% to 19% of total conversations, indicating a broadening of use cases.
Two forces drive this shift. First, programming tasks are migrating from the Claude.ai interface to the API, where Claude Code breaks work into smaller API calls labeled as distinct task types, dispersing programming activity across many categories. Second, the mix of usage is changing: coursework‑related dialogues dropped from 19% to 12% of conversations while personal use rose from 35% to 42%, driven partly by academic calendars and a surge of new users registering around February.
By contrast, API task concentration stayed roughly steady at 28–33%, suggesting the diversification is largely a consumer‑side phenomenon.
Geographic Convergence and Global Disparities
Within the United States, the AI Usage Index (adjusted for working‑age population) continues to converge across states, though the pace has slowed: the top five states’ share of per‑capita usage dropped from 30% to 24% between August 2025 and February 2026, and the Gini coefficient of state‑level usage is still falling, though at the current rate full convergence would take another 5–9 years.
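As a refresher on the convergence metric, a Gini coefficient of 0 means usage is perfectly even across states and values near 1 mean it is concentrated in a few. A minimal sketch, with hypothetical per‑capita usage numbers (not the report's data):

```python
def gini(values):
    """Gini coefficient of non-negative values (0 = perfectly equal, ~1 = concentrated)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Mean-absolute-difference form: G = sum_i (2i - n - 1) * x_i / (n * total)
    return sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1)) / (n * total)

# Hypothetical per-capita usage indices for five states
usage = [1.0, 1.2, 1.5, 3.0, 5.3]
print(round(gini(usage), 3))  # -> 0.347
```

Tracking this number over successive reporting periods is what lets the report say convergence is continuing but slowing.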
Globally, usage is becoming slightly more concentrated: the top‑20 countries now account for 48% of adjusted usage, up from 45%, reflecting digital‑infrastructure gaps, language barriers, and payment‑ability differences.
Model Selection as a Strategic Choice
Claude offers three model tiers—Haiku, Sonnet, and Opus—each balancing cost, speed, and performance. Opus, the most token‑expensive, excels at complex tasks; data show that 55% of computer‑science tasks use Opus versus 45% of education tasks, a 10‑percentage‑point gap.
Technical users tend to switch from the default Sonnet to Opus for performance gains, while efficiency‑focused users keep simple tasks on Sonnet to stay within usage limits.
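The behavior described above amounts to routing tasks by expected value. A minimal sketch of that tradeoff, where the tier names match Claude's lineup but the complexity scores, cost ratios, and thresholds are illustrative assumptions, not Anthropic's actual logic:

```python
# (tier, relative token cost, max task complexity it is typically used for)
# All numbers here are illustrative assumptions.
TIERS = [("haiku", 1, 0.3), ("sonnet", 5, 0.7), ("opus", 25, 1.0)]

def pick_model(complexity: float) -> str:
    """Return the cheapest tier rated for a task of the given complexity (0-1)."""
    for name, _cost, ceiling in TIERS:
        if complexity <= ceiling:
            return name
    return TIERS[-1][0]  # fall back to the top tier for out-of-range scores

print(pick_model(0.2), pick_model(0.9))  # -> haiku opus
```

An efficiency‑focused user effectively lowers the thresholds to conserve usage limits; a performance‑focused user raises them, sending more work to Opus.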
Long‑Term Users Gain More From Claude
Users registered for at least six months (high‑seniority) show higher collaboration depth: they use Claude 7 percentage points more for work, tackle tasks requiring higher education levels, and have a slightly lower top‑10 task concentration (20.7% vs. 22.2%).
Regression analysis indicates that high‑seniority users enjoy a 5‑point higher raw success rate, which narrows to about 3 points after controlling for task type, and remains roughly 4 points higher even after adjusting for model choice, scenario, language, and country.
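The narrowing of the raw gap once task type is controlled for is a composition effect: senior users both pick harder tasks and do slightly better within each task. A toy direct‑standardization sketch (with made‑up counts, not the report's data) shows how reweighting both cohorts to a common task mix shrinks the measured gap:

```python
# Hypothetical (successes, attempts) by cohort and task type — illustrative only.
rates = {
    ("senior", "coding"):  (880, 1000),
    ("senior", "writing"): (156, 200),
    ("new",    "coding"):  (85,  100),
    ("new",    "writing"): (750, 1000),
}

def raw_rate(cohort):
    s = sum(v[0] for (c, _), v in rates.items() if c == cohort)
    n = sum(v[1] for (c, _), v in rates.items() if c == cohort)
    return s / n

# Direct standardization: apply each cohort's within-task success rates
# to the pooled task mix, so differences in task choice cancel out.
tasks = {"coding", "writing"}
pooled = {t: sum(v[1] for (_, tt), v in rates.items() if tt == t) for t in tasks}
total = sum(pooled.values())

def adjusted_rate(cohort):
    return sum(
        (rates[(cohort, t)][0] / rates[(cohort, t)][1]) * pooled[t] / total
        for t in tasks
    )

raw_gap = raw_rate("senior") - raw_rate("new")
adj_gap = adjusted_rate("senior") - adjusted_rate("new")
print(f"raw gap: {raw_gap:+.1%}, task-adjusted gap: {adj_gap:+.1%}")
# -> raw gap: +10.4%, task-adjusted gap: +3.0%
```

The report's regressions do the same kind of adjustment with more covariates (model choice, scenario, language, country), which is why the residual gap differs depending on the control set.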
Discussion and Implications
The report describes two concurrent trends: Claude is evolving from a niche professional tool into a broad‑based assistant, while its longest‑standing users grow more proficient, achieving higher success rates on increasingly challenging tasks.
This dynamic may amplify skill‑biased labor‑market effects: high‑skill workers capture larger AI‑driven wage gains, while an influx of newer, less technical users pulls down average measured performance.
Policymakers should consider expanding AI‑literacy training to mitigate unequal AI benefit distribution, as AI proficiency appears poised to become a critical new skill akin to computer literacy decades ago.
SuanNi
A community for AI developers that aggregates large-model development services, models, and compute power.