Can Huang’s Law Double AI Performance Every Two Years? NVIDIA GTC 2020 Insights

At NVIDIA’s GTC China 2020, chief scientist Bill Dally highlighted “Huang’s Law”, which predicts that GPU-driven AI performance will double every two years, introduced projects such as MAGNet, optical interconnects, and the Legate programming model, and discussed the broader implications for the AI ecosystem and industry adoption.


Preface

During a casual meeting with a friend who works in banking, our conversation turned to the bank’s digital transformation and the shortage of GPU resources across its departments, which reminded me of NVIDIA founder Jensen Huang’s prediction known as Huang’s Law.

At the NVIDIA GTC China 2020 online conference, NVIDIA chief scientist Bill Dally emphasized that GPU performance for AI will continue to double each year, drawing a parallel to the historic Moore’s Law. He argued that Huang’s Law could remain a reliable indicator for AI hardware progress for a long time.

Moore’s Law

Moore’s Law, proposed by Intel co‑founder Gordon Moore in the 1960s, observed that the number of transistors on a chip roughly doubles every two years, leading to a corresponding increase in computing performance.

Huang’s Law

Similar to Moore’s Law, Huang’s Law predicts that AI‑focused chip performance will double roughly every two years, driven by continuous improvements in both hardware and software.
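
For a sense of what that compounding implies, here is a back-of-the-envelope sketch (my own arithmetic, not a figure from the keynote): doubling every two years works out to roughly a 32× improvement over a decade, while doubling every year would compound to about 1,000×.

```python
# Back-of-the-envelope compounding: performance = baseline * 2 ** (years / doubling_period)
def projected_speedup(years: float, doubling_period_years: float) -> float:
    """Relative performance after `years`, assuming one doubling per period."""
    return 2 ** (years / doubling_period_years)

print(projected_speedup(10, 2))  # ~32x over a decade at a two-year cadence
print(projected_speedup(10, 1))  # ~1024x over a decade at a one-year cadence
```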

The three projects highlighted by Bill Dally are:

MAGNet

MAGNet is an NVIDIA‑developed tool that coordinates data flow across devices, minimizing transmission overhead and achieving inference performance of 100 TOPS/W in simulations—an order of magnitude higher than current commercial chips.
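
To put the 100 TOPS/W figure in perspective, here is a simple unit conversion of the number quoted above (my own arithmetic, not a detail from the talk): 100 tera-operations per second per watt corresponds to about 10 femtojoules of energy per operation.

```python
# Convert the quoted efficiency figure into energy per operation.
# 1 W = 1 J/s, so TOPS per watt is the same as tera-operations per joule.
tops_per_watt = 100                          # figure quoted for MAGNet in simulation
ops_per_joule = tops_per_watt * 1e12
joules_per_op = 1 / ops_per_joule
print(f"{joules_per_op * 1e15:.1f} fJ per operation")  # -> 10.0 fJ per operation
```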

Optical Link

This project, a collaboration between NVIDIA and Columbia University, aims to replace electrical interconnects with high‑speed optical links. The goal is to achieve terabit‑per‑second data transmission on a millimeter‑scale chip, potentially increasing interconnect density tenfold.

Bill Dally also showed a future DGX system design that would use this optical technology to link more than 160 GPUs.

Legate

Legate is a new programming system prototype that enables developers to run single‑GPU code on systems of any scale, from a single GPU to massive supercomputers like Selene, which houses thousands of GPUs.
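
To illustrate what running “single-GPU code at any scale” looks like, here is a minimal sketch in the style of Legate’s NumPy-compatible interface. The import path legate.numpy follows NVIDIA’s published Legate NumPy work and should be treated as an assumption; the packaging has since evolved (the array library was later renamed cuNumeric), so the exact module name may differ in current releases.

```python
# Minimal sketch of Legate-style code: ordinary NumPy operations, only the import changes.
# Assumption: the drop-in module is importable as `legate.numpy`; in newer releases
# the equivalent library is named cuNumeric, so adjust the import accordingly.
import legate.numpy as np  # instead of: import numpy as np

# The same array code that would run on a single GPU; Legate's runtime decides
# how to partition and schedule it across however many GPUs (or nodes) exist.
a = np.random.rand(10_000, 10_000)
b = np.random.rand(10_000, 10_000)
c = a @ b                  # matrix multiply, distributed by the runtime
print(float(c.sum()))      # reduction gathered back to the calling program
```

The point of the design is that scaling from a single GPU to a system like Selene becomes a deployment decision rather than a code change.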

Dally stressed that GPUs are the foundation of Huang’s Law, and their continued success is crucial for the law’s validity.

Realizing Huang’s Law will require industry‑wide demand for AI capabilities and bold innovation. NVIDIA is actively building an AI ecosystem, showcasing twelve AI startups from its accelerator program that span conversational AI, smart healthcare, consumer internet, deep‑learning acceleration, autonomous machines, and self‑driving cars.

Whether Huang’s Law will become as influential as Moore’s Law remains to be seen, but NVIDIA’s efforts suggest a strong commitment to driving AI performance forward.

Tags: GPU, NVIDIA, optical interconnect, AI performance, Huang's Law, Legate, MAGNet
Written by Programmer DD, a tinkering programmer and author of "Spring Cloud Microservices in Action".
