What Lies Behind Huawei’s Ascend 910C AI Chip? Performance, Supply Chain, and Strategic Impact

This article translates and analyzes Lennart Heim’s deep dive into Huawei’s Ascend 910C AI accelerator, covering its dual‑chip architecture, packaging trade‑offs, performance versus NVIDIA’s H100 and upcoming B200, mysterious supply‑chain origins, and the broader strategic implications for China’s AI competition.


Technical composition: clever "double‑chip" design

The Ascend 910C does not introduce a brand‑new architecture; instead it combines two existing 910B chips using advanced packaging, creating a sophisticated "fusion" that leverages mature processes to boost performance without the cost of a ground‑up redesign.

Packaging trade‑off: balancing performance and cost

Huawei chose a relatively mature packaging solution—placing two 910B dies on separate silicon interposers and connecting them with an organic substrate—rather than pursuing cutting‑edge CoWoS or Foveros technologies. This results in inter‑chip bandwidth that is 10‑20 times lower than NVIDIA’s most advanced packages, limiting data exchange efficiency but reducing cost, improving yield, and accelerating volume production.
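To see why that bandwidth gap matters, the trade-off can be sketched as a lower bound on die-to-die transfer time. The payload size and the organic-substrate link rate below are hypothetical order-of-magnitude assumptions, not figures from the article; only the 10-20x ratio (midpoint 15x) comes from the text.

```python
# Illustrative sketch of how inter-die bandwidth bounds the time to move
# a tensor between the two 910B dies in one package. The payload size and
# the 200 GB/s organic-substrate figure are hypothetical assumptions.
def transfer_time_us(payload_mb: float, link_gbs: float) -> float:
    """Lower-bound transfer time in microseconds over a die-to-die link."""
    return payload_mb / 1024 / link_gbs * 1e6  # MB -> GB, then s -> us

payload_mb = 64.0                    # hypothetical activation working set
organic_substrate_gbs = 200.0        # assumed order of magnitude
advanced_packaging_gbs = 200.0 * 15  # 10-20x higher per the article (midpoint)

slow = transfer_time_us(payload_mb, organic_substrate_gbs)
fast = transfer_time_us(payload_mb, advanced_packaging_gbs)
print(f"organic substrate: {slow:.1f} us vs advanced packaging: {fast:.1f} us")
```

The absolute numbers are illustrative; the point is that any cross-die traffic pays a fixed ~15x time penalty, which is why workloads must be partitioned to keep most data movement within a single die.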

Performance and specifications: the "80%" gap and catching up

Objective assessment: roughly 80% of H100

Heim estimates the 910C can deliver about 800 TFLOPS of FP16 compute and roughly 3.2 TB/s of memory bandwidth, approximately 80% of the H100 NVIDIA launched in 2022. However, its logic die area is about 60% larger than the H100's, indicating lower architectural efficiency per unit of silicon.
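The "80%" figure can be checked directly against public H100 specifications. The H100 reference numbers below (~1,000 TFLOPS dense FP16, ~3.35 TB/s HBM3 on the SXM variant) are assumptions drawn from NVIDIA's public spec sheet, not from the article:

```python
# Back-of-envelope check of the article's 910C estimates against assumed
# public H100 SXM figures (~1000 TFLOPS dense FP16, ~3.35 TB/s HBM3).
ascend_910c = {"fp16_tflops": 800.0, "mem_bw_tbs": 3.2}   # article's estimates
h100 = {"fp16_tflops": 1000.0, "mem_bw_tbs": 3.35}        # assumed public spec

compute_ratio = ascend_910c["fp16_tflops"] / h100["fp16_tflops"]
bandwidth_ratio = ascend_910c["mem_bw_tbs"] / h100["mem_bw_tbs"]

print(f"FP16 compute:     {compute_ratio:.0%} of H100")  # 80%
print(f"memory bandwidth: {bandwidth_ratio:.0%} of H100")
```

Notably, under these assumptions the bandwidth gap is smaller than the compute gap, which matters for bandwidth-bound inference workloads.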

Generational gap: facing NVIDIA’s B200

Compute performance: roughly one third of the B200's.

Memory bandwidth: roughly 40% of the B200's, even assuming HBM2E on the 910C.

Energy efficiency: noticeably behind the B200.

By 2025, the performance gap could widen further as B200 chips become mainstream.
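The claimed gaps can be sanity-checked the same way. The B200 reference figures below (~2,250 TFLOPS dense FP16, ~8 TB/s HBM3e) are assumptions from NVIDIA's public Blackwell announcements, not from the article:

```python
# Hedged sanity check of the "~3x compute / ~2.5x bandwidth" generational
# gap, using assumed public B200 figures (not from the article).
b200 = {"fp16_tflops": 2250.0, "mem_bw_tbs": 8.0}   # assumed public spec
a910c = {"fp16_tflops": 800.0, "mem_bw_tbs": 3.2}   # article's estimates

compute_gap = b200["fp16_tflops"] / a910c["fp16_tflops"]
bandwidth_gap = b200["mem_bw_tbs"] / a910c["mem_bw_tbs"]

print(f"compute gap:   {compute_gap:.1f}x")    # ~2.8x, close to "about 3x"
print(f"bandwidth gap: {bandwidth_gap:.1f}x")  # 2.5x
```

Under these assumptions the article's round figures hold up: roughly a 3x compute gap and exactly a 2.5x bandwidth gap.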

Supply chain and production: the mysterious source

"TSMC covert stockpiling"?

Heim speculates that Huawei may have secured up to three million 7 nm Ascend dies from TSMC before export controls tightened, and possibly large quantities of HBM2E memory from Samsung. This could enable the production of around 1.4 million 910C accelerators, equating to the AI compute of roughly one million NVIDIA H100 chips.
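Heim's stockpile conjecture is internally consistent arithmetic, which is easy to reproduce. The ~93% packaging yield below is not stated in the article; it is the value implied by turning 3 million dies into 1.4 million two-die packages:

```python
# Reproducing the stockpile arithmetic from Heim's conjecture as relayed
# in the article: ~3M dies, two dies per 910C, ~1.4M finished units.
dies_stockpiled = 3_000_000   # 7 nm Ascend dies allegedly sourced from TSMC
dies_per_unit = 2             # each 910C packages two 910B dies
finished_units = 1_400_000    # the article's production estimate

implied_yield = finished_units * dies_per_unit / dies_stockpiled
h100_equivalents = finished_units * 0.8  # each 910C ~ 80% of an H100

print(f"implied packaging yield: {implied_yield:.0%}")  # ~93%
print(f"aggregate compute: ~{h100_equivalents / 1e6:.1f}M H100-equivalents")
```

At 80% per-chip performance, 1.4 million units works out to about 1.1 million H100-equivalents, matching the article's "roughly one million" claim.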

Domestic manufacturing prospects

While Huawei can likely fabricate 7 nm chips such as the 910B/910C domestically, large‑scale, high‑yield production remains unproven, and most 910C units on the market may still originate from unofficial TSMC channels.

Strategic significance and global AI competition

Performance gap is real, but strategic impact is huge

Despite the 10‑20× compute advantage held by the West, China can leverage its ability to concentrate resources, focusing on AI inference and specific industry applications (smart cities, transportation, manufacturing, security). This differentiated approach could allow China to achieve “local” leadership even while overall compute capacity lags.

"Inference first, application breakout" outlook

Prioritizing inference chips and platforms may let China deliver scalable AI solutions in targeted sectors, offsetting disadvantages in large‑scale pre‑training.

Conclusion and outlook

"80% performance" carries strategic weight

The Ascend 910C is not a flagship in raw specs, but its emergence under export‑control pressure demonstrates China’s resilience and strategic intent. If Heim’s supply‑chain conjecture holds, the chip’s collective compute power could become a significant lever in the global AI landscape, even if individual cards fall short of the very top.

Future challenges

Continued investment in advanced process R&D, AI infrastructure, and “defensive AI” technologies will be essential for China to narrow the compute gap and sustain a competitive edge.

Tags: Performance Analysis, AI Competition, AI Chip, Semiconductor, Huawei, Ascend 910C
Written by

Open Source Linux

Focused on sharing Linux/Unix content, covering fundamentals, system development, network programming, automation/operations, cloud computing, and related professional knowledge.
