Overview of High Bandwidth Memory (HBM) Technology and Market Trends
The article explains the evolution, technical specifications, and packaging methods of High Bandwidth Memory (HBM) from HBM1 to HBM3E, highlights its dominant role in AI servers, and analyzes market share and growth forecasts for HBM products through 2026.
HBM (High Bandwidth Memory) stacks multiple DRAM dies vertically and connects them to a base logic die through TSVs (Through-Silicon Vias), allowing 8- or 12-high stacks in a compact form factor. The resulting wide interface delivers high bandwidth and fast data transfer, which has made HBM the mainstream memory for AI server GPUs.
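The capacity arithmetic behind the 8- and 12-high stacks above can be sketched as follows. This is an illustrative calculation, not vendor data; the 16 Gb (2 GB) per-die density is an assumption for the sketch, since actual die densities vary by generation:

```python
# Illustrative sketch: per-stack capacity of an HBM package as a
# function of stack height, assuming 16 Gb (2 GB) DRAM dies.
DIE_CAPACITY_GB = 2  # assumed per-die density; real dies vary by generation

def stack_capacity_gb(layers: int) -> int:
    """Total capacity of one HBM stack with the given number of DRAM layers."""
    return layers * DIE_CAPACITY_GB

print(stack_capacity_gb(8))   # 8-high stack  -> 16
print(stack_capacity_gb(12))  # 12-high stack -> 24
```

Under this assumed die density, an 8-high stack yields 16 GB and a 12-high stack 24 GB per package.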
The latest extension, HBM3E, offers up to 8 Gbps per-pin transfer speed and 16 GB per stack; SK Hynix was the first to announce it, with mass production expected in 2024.
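A quick back-of-the-envelope check shows what the quoted per-pin rate implies for a single stack. The 1024-bit interface width assumed here is the standard HBM interface width per stack (not stated in the article itself):

```python
# Per-stack bandwidth implied by the figures above.
# Assumption: standard 1024-bit HBM interface per stack.
PINS_PER_STACK = 1024   # interface width, bits
PIN_RATE_GBPS = 8       # per-pin transfer rate quoted for HBM3E

bandwidth_gbs = PINS_PER_STACK * PIN_RATE_GBPS / 8  # bits -> bytes
print(bandwidth_gbs)  # 1024.0 GB/s, i.e. roughly 1 TB/s per stack
```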
HBM is primarily used in AI servers; the newest generation, HBM3E, is integrated into Nvidia's H200 GPU, announced in 2023. According to TrendForce, AI server shipments were 860,000 units in 2022 and are projected to exceed 2 million units by 2026, a compound annual growth rate of 29%.
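The TrendForce shipment projection quoted above can be sanity-checked by compounding the 2022 baseline at the stated CAGR:

```python
# Sanity check of the quoted TrendForce figures:
# 860,000 AI servers in 2022, growing at a 29% CAGR through 2026.
base_2022 = 860_000
cagr = 0.29
years = 2026 - 2022

projected_2026 = base_2022 * (1 + cagr) ** years
print(round(projected_2026))  # ~2.38 million, consistent with "exceed 2 million"
```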
The surge in AI server shipments drives explosive demand for HBM, and as server HBM capacity increases, the market size is estimated to reach about US$15 billion by 2025, growing at over 50% annually.
HBM supply is concentrated among three major memory manufacturers: SK Hynix, Samsung, and Micron. TrendForce forecasts SK Hynix’s market share at 53% in 2023, Samsung at 38%, and Micron at 9%.
Key packaging technologies for HBM include CoWoS (Chip-on-Wafer-on-Substrate) and TSV. CoWoS places the logic die (e.g., a GPU) and the HBM stacks side by side on a silicon interposer, which is then mounted on the package substrate; the short interposer interconnects enable higher data rates, and Nvidia uses the approach in the A100, GH200, and other accelerators. TSVs are vertical connections etched through the silicon die, enabling multi-die stacking with internal interconnects.
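Since CoWoS lets a GPU sit next to several HBM stacks on one interposer, aggregate package bandwidth scales with the stack count. The sketch below is illustrative only; the stack count and per-pin rate in the example are assumptions, not the specification of any particular product:

```python
# Illustrative only: aggregate memory bandwidth of a package that
# integrates several HBM stacks on a CoWoS interposer.
def package_bandwidth_tbs(stacks: int, pin_rate_gbps: float,
                          pins_per_stack: int = 1024) -> float:
    """Aggregate bandwidth in TB/s across all HBM stacks on the interposer."""
    return stacks * pins_per_stack * pin_rate_gbps / 8 / 1000

# Example: five stacks at an assumed HBM3-class 6.4 Gbps per pin
print(package_bandwidth_tbs(5, 6.4))  # 4.096 TB/s
```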
HBM’s high bandwidth, low power consumption, and small footprint make it widely adopted in high-performance AI servers, starting with Nvidia's 2016 P100 GPU (HBM2) and progressing through the V100, A100, H100, and the latest H200.
SK Hynix leads the HBM market: it partnered with AMD to launch the world's first HBM and was the first to supply the new HBM3E, primarily to Nvidia, while Samsung supplies other cloud providers.
For further reading, the article lists numerous related reports and analyses on AI compute, GPU technology, and memory systems.
Architects' Tech Alliance