Arm’s 2025 C1 & G1 Architectures: Performance, Power, and AI Breakthroughs
Arm’s 2025 launch replaces the X and A series with the C1 CPU family and G1 GPU, introducing Armv9.3‑A cores, new branding for different markets, a Lumex CSS platform, and the SME2 matrix extension that together boost performance, cut power consumption, and dramatically improve AI capabilities.
In September 2025 Arm announced its newest processor lineup, abandoning the previous X‑series and A‑series naming in favor of the C1 series (C1‑Ultra, C1‑Premium, C1‑Pro, C1‑Nano) built on the Armv9.3‑A architecture, and introduced a new GPU architecture called Mali G1.
Arm also created market‑specific brand names: Neoverse for servers, Zena for automotive, Lumex for mobile devices, Niva for personal computers, and Orbis for IoT. The new PC brand Niva is expected to strengthen Windows‑on‑Arm support, an area where Arm‑based Windows devices powered by Qualcomm's Snapdragon X series processors are already shipping.
The mobile‑focused C1 series and G1 GPU belong to the Lumex CSS platform, which bundles CPU, GPU, system IP, 3 nm process implementations, and ecosystem support such as pre‑silicon guidance, Android 16 compatibility, and SME2 extensions to accelerate chip design for manufacturers.
C1‑Ultra, the flagship core, shows roughly a 12 % IPC increase over the previous Cortex‑X925, a 25 % peak‑performance boost, and a 28 % power reduction at equal performance while staying on a 3 nm process. Its clock can reach 4.1 GHz or higher, L1 data cache grows from 64 KB to 128 KB, and branch‑prediction accuracy is improved by expanding the predictor history. L1 instruction TLB bandwidth is increased by 50 %.
C1‑Premium reduces area by about 35 % compared with Ultra by trimming the vector unit and L2 cache, delivering performance comparable to the Cortex‑X4.
C1‑Pro is a high‑performance core that improves gaming performance by 16 % over the previous generation and builds on the efficient Cortex‑A725 design. It enhances branch‑prediction throughput, enlarges the L1 instruction TLB by 50 %, lowers branch‑prediction power, widens L1 data‑cache bandwidth, optimizes L2 TLB latency, adds an indirect predictor, and improves prefetching. Compared with the A725 it delivers an 11 % performance gain, and a 26 % power reduction at equal performance.
C1‑Nano is a low‑power core that succeeds the Cortex‑A520. It delivers a 26 % efficiency improvement and a 5.5 % performance increase with less than 2 % area growth, and uses a decoupled predict‑and‑fetch pipeline to improve instruction prefetch.
The new C1‑DSU replaces the DSU‑120, saving about 11 % power and supporting AI workloads with SME2. It reduces area, adds Quick‑Nap L3 support for faster wake‑up, and can connect up to 14 cores, supporting SME2 in all configurations except the minimal 2‑core setup.
SME2 (Scalable Matrix Extension 2) is the second‑generation matrix‑extension instruction set designed for AI/ML workloads. It introduces multi‑vector instructions, dynamic de‑quantization, and a variable‑length register file (128‑2048 bit), delivering up to 5× AI performance and 3× energy‑efficiency gains. Developers can adopt SME2 with minimal code changes using Arm’s KleidiAI SDK or C intrinsics.
2025 Arm processors use the new C1 naming, with the G1 GPU for mobile.
The CPU family includes C1‑Ultra, C1‑Premium, C1‑Pro, and C1‑Nano.
C1‑Ultra offers a 25 % peak‑performance increase, 12 % IPC gain, and 28 % power reduction.
C1‑Premium is a smaller, secondary‑flagship core comparable to Cortex‑X4.
C1‑Pro upgrades the A725 core with 11 % performance gain and 26 % power reduction at similar performance.
C1‑Nano improves efficiency by 26 % over the A520.
C1‑DSU upgrades DSU‑120, saving 11 % power and adding AI‑friendly SME2 support.
All C1 cores fully support the SME2 extension, dramatically boosting AI matrix‑operation performance.
OPPO Kernel Craftsman
Sharing cutting‑edge Linux kernel technology, technical articles, news, and curated tutorials
