Architects' Tech Alliance
Nov 6, 2025 · Artificial Intelligence

Inside scaleX640: How China’s First 640‑Card Supernode Redefines AI Compute

The scaleX640 supernode, unveiled at the Wuzhen World Internet Conference, packs 640 AI accelerators into a single rack, delivering unprecedented compute density, energy efficiency, open ecosystem compatibility, and reliability features that enable massive AI model training and inference at scale.

AI hardware · energy efficiency · high performance computing
4 min read
Architects' Tech Alliance
Jul 24, 2025 · Artificial Intelligence

Inside Huawei’s CloudMatrix384: How a 384‑NPU AI Supernode Achieves Sub‑Microsecond Latency

The article details Huawei’s CloudMatrix384 AI supernode, describing its 384 Ascend 910C NPUs, 192 Kunpeng CPUs, ultra‑high‑bandwidth UB network, three complementary network planes (UB, RDMA, VPC), and the non‑blocking topology that enables sub‑microsecond inter‑node latency across a 16‑rack deployment.

AI hardware · Huawei · RDMA
9 min read
Baidu Intelligent Cloud Tech Hub
May 23, 2025 · Artificial Intelligence

How Baidu’s Kunlun Supernode Redefines AI Compute Density and Performance

This article explains how Baidu’s Kunlun supernode, built on high‑density liquid‑cooled cabinets and a modular 1U 4‑card design, breaks traditional 8‑card limits, boosts compute density four‑fold, improves power and cooling efficiency, and provides a scalable foundation for large‑model AI training and inference.

AI infrastructure · GPU cluster · liquid cooling
13 min read
AI Cyberspace
May 20, 2025 · Artificial Intelligence

Why SuperNode and SuperPOD Are Critical for Scaling AI Models

This article explains the scaling laws behind large language models, the explosive growth of model sizes and compute demands, and why modern AI infrastructure must adopt SuperNode and SuperPOD architectures that combine high‑bandwidth Scale‑Up networks with flexible Scale‑Out networking to overcome bandwidth, latency, and power challenges.

AI scaling · Distributed Training · SuperPOD
42 min read