Why GPUs Still Dominate AI Compute and What’s Driving the Next Chip Upgrade

The article analyzes how AI compute centers rely on GPUs and emerging AI chips, examines the booming demand for HBM memory, the scarcity of advanced CoWoS packaging, and the rising need for sophisticated backside power delivery as AI models scale.


The piece is distilled from the article “AI智算时代:算力芯片加速升级” (“The Era of AI Compute: Compute Chips Accelerate Their Upgrade”). It explains that AI compute centers are built on modern AI theory and advanced architectures, with demand driven by large AI models; compute power and algorithms are the core resources, delivered through AI chips, servers, and clusters.

GPUs continue to dominate the AI compute-chip market, and the CPU+GPU combination remains the most widely deployed platform. By value, the AI distributed-computing market breaks down into compute chips (55–75%), memory (10–20%), and interconnect devices (10–20%).

Following the explosive popularity of ChatGPT, GPU demand has surged, prompting Nvidia to increase its HBM3 orders with Samsung and SK Hynix. SK Hynix announced in October 2023 that its entire 2024 output of HBM3 and HBM3E had already sold out, and Omdia forecasts the global HBM market will reach $2.5 billion by 2025.

Tight integration of compute and memory runs into a capacity bottleneck in advanced packaging. CoWoS (chip-on-wafer-on-substrate) is the mainstream solution for co-packaging HBM with CPUs/GPUs. TSMC leads the global CoWoS market; IDC predicts a supply–demand gap of roughly 20%, expects TSMC's CoWoS capacity to double in 2024, and projects the 2.5D/3D advanced-packaging market to grow at a 22% CAGR over 2023–2028.

AI compute also imposes new requirements on power delivery. As integration density increases, power solutions for accelerators grow more complex, requiring multiple voltage rails and multi-path inputs, which leads to a proliferation of voltage domains on the chip.

Major chip manufacturers such as TSMC, Samsung, and Intel are actively developing backside power‑delivery network technologies to provide efficient power for increasingly complex chips, with Intel currently taking a leading position.

Republication Notice

This article has been distilled and summarized from source material, then republished for learning and reference. If you believe it infringes your rights, please contact admin@besthub.dev and we will review it promptly.

Tags: industry analysis, AI compute, GPU dominance, backside power delivery, CoWoS packaging, HBM memory
Written by Architects' Tech Alliance

Sharing project experiences, insights into cutting-edge architectures, focusing on cloud computing, microservices, big data, hyper-convergence, storage, data protection, artificial intelligence, industry practices and solutions.
