
Why China Overtook the US on Hugging Face: Inside the 2025 Open‑Source AI Surge

A comprehensive analysis of Hugging Face data reveals how China became the world’s largest monthly downloader of open‑source AI models in 2025, reshaping the global AI ecosystem through rapid growth, shifting geography, evolving model sizes, hardware diversification, and expanding robotics and scientific sub‑communities.

SuanNi

Overview

The Hugging Face spring‑2026 report maps the current state of the open‑source AI ecosystem, highlighting a dramatic shift in 2025 where China surpassed the United States to become the top source of model downloads on the platform.

Growth Metrics

Platform users grew to 13 million, public models exceeded 2 million, and datasets surpassed 500,000, with each figure nearly doubling year over year. More users now create derivative works such as fine-tunes, adapters, benchmarks, and applications rather than merely consuming models.

Ecosystem Concentration

Despite overall growth, concentration remains high: about half of all models receive fewer than 200 downloads each, while the top 200 models, just 0.01% of the catalog, capture 49.6% of download traffic.
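As a toy illustration of this head-heavy pattern (synthetic numbers, not the report's data), a Zipf-like catalog, where download counts fall off roughly as one over a model's popularity rank, shows how a few hundred models can end up capturing about half of all traffic:

```python
def head_share(downloads, head_size):
    """Fraction of total downloads captured by the top `head_size` models."""
    ranked = sorted(downloads, reverse=True)
    return sum(ranked[:head_size]) / sum(ranked)

# Synthetic Zipf-like catalog of 100,000 models: downloads ~ 1 / rank
catalog = [1_000_000 // rank for rank in range(1, 100_001)]

top_200 = head_share(catalog, 200)
print(f"Top 200 of {len(catalog):,} models capture {top_200:.1%} of downloads")
```

With this distribution the top 200 models land close to the roughly 50% share the report observes, even though they are a vanishing fraction of the catalog.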

Competitive Landscape

Over 30% of Fortune 500 companies now hold certified Hugging Face accounts. Start‑ups treat open‑source models as default components; major IDEs such as VS Code and Cursor support both open and closed models. Large tech firms—including NVIDIA, Meta, Google, and others—are the most active contributors, creating new repositories at a steady pace.

Geographic Shifts

Historically, the US and China have led contributions, followed by the UK, Germany, and France. In 2025, Chinese models captured 41% of monthly download share, overtaking the US. Independent developers and distributed groups account for roughly half of total downloads, emphasizing a decentralised contributor base.

Model Size Trends

Average model size rose from 827 million parameters in 2023 to 20.8 billion in 2025, driven largely by mixture-of-experts architectures, with quantisation making such large models practical to distribute and run. Median size grew only modestly, from 326 million to 406 million parameters, indicating that small models remain heavily used while a handful of very large models inflate the mean.
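The gap between mean and median is easy to reproduce with made-up numbers (illustrative only, not the report's data): a few giant models pull the mean far above the median, which stays anchored to the small models most people actually download.

```python
from statistics import mean, median

# Parameter counts in millions: many small models plus a few MoE giants
sizes_m = [300] * 90 + [7_000] * 7 + [400_000] * 3

print(f"mean:   {mean(sizes_m):,.0f}M parameters")   # pulled up by the giants
print(f"median: {median(sizes_m):,.0f}M parameters") # stays with the small models
```

Here the mean comes out at 12,760M (about 12.8B) while the median sits at 300M, the same skew pattern the report describes.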

Sovereign AI & Hardware

Open‑source weights enable governments to fine‑tune models on local data within national legal frameworks, reducing reliance on foreign cloud providers. Nations such as South Korea have launched sovereign AI programmes, and Chinese firms are adding support for domestic chips. NVIDIA remains the dominant optimisation target, but AMD support is expanding, and Hugging Face’s Kernel Hub now serves both GPU families.

Robotics & Science Sub‑communities

Robotics emerged as the fastest‑growing Hugging Face sub‑community, with datasets jumping from 1,145 in 2024 to 26,991 in 2025, overtaking text generation. Projects like RoboMIND, LeRobot, and Pollen Robotics provide real‑world trajectories and tools for imitation learning, reinforcement learning, and multimodal robot control. Scientific domains (protein folding, molecular dynamics, drug discovery) also see increasing adoption of open‑source models.

Future Outlook

Geopolitical power is rebalancing: Western organisations seek Chinese alternatives, while US/European initiatives (GPT‑OSS, OLMo, Gemma) aim to provide competitive open‑source options. The continued expansion of robotics and scientific sub‑communities suggests open‑source AI will move beyond language and vision into physical and experimental realms, making interoperability a cornerstone for the next generation of AI agents.

Robotics · AI hardware · China AI · Hugging Face · Model ecosystem · Open-source AI · AI Market Trends
Written by

SuanNi

A community for AI developers that aggregates large-model development services, models, and compute power.
