TOP500 Supercomputer Rankings (54th Edition) – Overview of Leading Systems and Regional Distribution
In the 54th edition of the TOP500 list (published November 2019), China and the United States continue to dominate in both system count and aggregate performance. The total HPL performance of the 500 systems has risen to 1.65 exaflops, and the entry threshold has increased from 1.02 petaflops (June 2019) to 1.14 petaflops.
Top-ranked Supercomputer Systems
The top 10 is unchanged: the same ten machines hold the same leading positions as on the previous list.
Summit and Sierra still hold the first two spots. Both are IBM‑built machines that combine Power9 CPUs with NVIDIA Tesla V100 GPUs. Oak Ridge National Laboratory’s Summit leads with an HPL result of 148.6 petaflops.
In second place is Lawrence Livermore National Laboratory’s Sierra, achieving 94.6 petaflops.
Following them is China’s Sunway TaihuLight, rated at 93.0 petaflops. It was developed by the National Research Center of Parallel Computer Engineering and Technology (NRCPC) and is installed at the Wuxi National Supercomputing Center, using only the Sunway SW26010 processor.
Tianhe‑2A (Milky Way‑2A), developed by the National University of Defense Technology (NUDT) and deployed at the Guangzhou National Supercomputing Center, ranks fourth with 61.4 petaflops, powered by Intel Xeon CPUs and Matrix‑2000 accelerators.
Frontera, installed at the Texas Advanced Computing Center (University of Texas), ranks fifth with 23.5 petaflops. It is a Dell C6420 system driven solely by Xeon Platinum processors.
Piz Daint, a Cray XC50 system located at the Swiss National Supercomputing Centre (CSCS) in Lugano, ranks sixth with 21.2 petaflops, making it the most powerful system in Europe.
Trinity, a Cray XC40 system operated jointly by Los Alamos and Sandia National Laboratories, holds seventh place with 20.2 petaflops from a combined Intel Xeon and Xeon Phi configuration.
In eighth place is Japan’s AI Bridging Cloud Infrastructure (ABCI) at the National Institute of Advanced Industrial Science and Technology (AIST), built by Fujitsu and equipped with Intel Xeon Gold CPUs and NVIDIA Tesla V100 GPUs, achieving 19.9 petaflops.
SuperMUC‑NG, installed at the Leibniz Supercomputing Centre near Munich, ranks ninth with 19.5 petaflops. It is a Lenovo‑built system using Intel Xeon Platinum processors.
Lassen, the tenth system, provides 18.2 petaflops. It is installed at Lawrence Livermore National Laboratory and is the unclassified counterpart of Sierra, sharing the same IBM Power9 and NVIDIA V100 GPU architecture.
Among newer entries, AiMOS appears at rank 24 with 8.0 petaflops. Built by IBM and installed at Rensselaer Polytechnic Institute’s Center for Computational Innovations, it also uses Power9 CPUs and NVIDIA V100 GPUs.
Regional Breakdown
In China, the number of TOP500 installations has risen to 227 (up from 219 six months earlier), or 45.4 % of all systems; the United States follows with 118 systems, a 23.6 % share.
Despite the smaller count, U.S. systems have a much higher average performance, accounting for 37.1 % of the total TOP500 performance, with China at 32.3 % (a quick calculation at the end of this section makes the per‑system averages concrete). The performance gap has narrowed compared with the June 2019 list (U.S. 38.4 %, China 29.9 %).
Japan ranks third with 29 systems, followed by France (18), Germany (16), the Netherlands (15), Ireland (14), and the United Kingdom (11). All other countries have single‑digit counts.
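As a rough sketch of what those shares imply per machine, the following calculation uses only figures quoted above: the 1.65‑exaflop list total plus each country's system count and performance share.

```python
# Back-of-the-envelope check of average HPL performance per system,
# using only figures quoted in this article.
TOTAL_PFLOPS = 1650  # total list performance: 1.65 exaflops in petaflops

regions = {
    # region: (number of systems, share of total HPL performance)
    "United States": (118, 0.371),
    "China":         (227, 0.323),
}

for name, (systems, share) in regions.items():
    avg = TOTAL_PFLOPS * share / systems
    print(f"{name}: ~{avg:.1f} petaflops per system")

# Output:
# United States: ~5.2 petaflops per system
# China: ~2.3 petaflops per system
```

In other words, the average U.S. system delivers more than twice the HPL performance of the average Chinese system.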
Vendor Share
By installation count, the top three vendors are Lenovo (174 systems), Sugon (71), and Inspur (65). Cray (36) ranks fourth, while HPE (35) is fifth; note that Cray is now part of HPE, so combined they match Sugon’s total.
At the chip level, Intel remains dominant, powering 470 of the 500 systems with Xeon or Xeon Phi processors. IBM is second, present in 14 systems (10 with Power CPUs, 4 with Blue Gene/PowerPC). Only three systems use AMD processors.
Two systems are based on ARM processors: the Astra system at Sandia National Laboratories (Marvell ThunderX2) and Fujitsu’s A64FX prototype, a forerunner of Fugaku, RIKEN’s upcoming exascale successor to the K computer.
NVIDIA is the leading accelerator supplier: of the 145 systems using accelerators (up from 134 on the previous list), 136 are equipped with NVIDIA GPUs.
Network Interconnect Share
Ethernet appears in 52 % of TOP500 systems (258 machines), while InfiniBand accounts for 28 % (140 machines). However, InfiniBand‑based machines deliver 40 % of the total performance versus 29 % for Ethernet‑based machines. Custom interconnects are used in only 46 systems, contributing 22 % of performance.
Green500 Results
The Green500 list, which ranks systems by energy efficiency, has changed dramatically at the top. First place goes to Fujitsu’s A64FX prototype supercomputer at 16.9 gigaflops/watt; second is NA‑1, a ZettaScaler machine using PEZY‑SC2 processors (16.3 gigaflops/watt).
Third is IBM’s new AiMOS system, followed by two more IBM machines (Satori at 15.6 gigaflops/watt and Summit at 14.7 gigaflops/watt). Positions six through ten are occupied by AI Bridging Cloud Infrastructure, MareNostrum P9 CTE, TSUBAME 3.0, PANGEA III, and Sierra.
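For reference, the Green500 metric is simply HPL performance divided by measured power draw. Below is a minimal worked example for Summit; the roughly 10.1 MW power figure is an assumption taken from Summit's TOP500 entry rather than from this article.

```python
# Green500 efficiency = HPL performance / measured power:
#   gigaflops/watt = Rmax (in gigaflops) / power (in watts)
rmax_petaflops = 148.6         # Summit's HPL result (quoted above)
power_megawatts = 10.096       # assumed power draw from the TOP500 entry

gflops = rmax_petaflops * 1e6  # petaflops -> gigaflops
watts = power_megawatts * 1e6  # megawatts -> watts

print(f"{gflops / watts:.1f} gigaflops/watt")  # ~14.7, matching the list
```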
HPCG Results
On the High Performance Conjugate Gradient (HPCG) benchmark, Summit and Sierra again take the top two spots, achieving 2.93 and 1.80 HPCG‑petaflops respectively.
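Those figures are a tiny fraction of the HPL results above, which is the point of reporting HPCG: it rewards memory bandwidth and irregular access rather than dense floating‑point throughput. A quick ratio check using only the numbers quoted here:

```python
# HPCG scores are a small fraction of HPL scores because the
# benchmark is memory-bound rather than compute-bound.
systems = {
    # system: (HPL petaflops, HPCG petaflops) -- figures quoted above
    "Summit": (148.6, 2.93),
    "Sierra": (94.6, 1.80),
}

for name, (hpl, hpcg) in systems.items():
    print(f"{name}: HPCG is {100 * hpcg / hpl:.1f}% of HPL")

# Output:
# Summit: HPCG is 2.0% of HPL
# Sierra: HPCG is 1.9% of HPL
```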
About the TOP500 List
The first TOP500 list was compiled in June 1993 as an exercise for a small conference in Germany. Out of curiosity, its authors revisited the list in November 1993 to see how things had changed, and the twice-yearly release schedule has continued ever since.