How Jack Dongarra’s Linpack Revolutionized Supercomputing and Earned a Turing Award
Jack Dongarra, a pioneering computer scientist, created the Linpack library and benchmark that enabled software to scale from laptops to exaflop supercomputers, earning him the 2021 ACM A.M. Turing Award and shaping modern high‑performance and cloud computing.
2021 Turing Award Winner
In March 2022, the Association for Computing Machinery (ACM) announced that 71‑year‑old Jack Dongarra had received the 2021 A.M. Turing Award for his contributions to numerical algorithms and software libraries.
Dongarra’s work on numerical algorithms and open‑source libraries (such as LINPACK, BLAS, LAPACK, ScaLAPACK, PLASMA, MAGMA, and SLATE) has kept software in step with exponentially growing hardware for over 40 years, from personal laptops to the world’s most powerful exaflop supercomputers.
He helped define the exascale (10¹⁸ floating‑point operations per second) performance milestone, the level at which today's fastest scientific simulations run.
Dongarra earned a B.S. in Mathematics from Illinois State University, an M.S. in Computer Science from Illinois Institute of Technology, and a Ph.D. in Applied Mathematics from the University of New Mexico.
Honors and Awards
Fellow, American Association for the Advancement of Science (1995)
Fellow, IEEE (1999)
Elected to the National Academy of Engineering (2001)
ACM Fellow (2001)
IEEE Sid Fernbach Award (2003)
IEEE Computer Society Charles Babbage Award (2011)
ACM/IEEE Ken Kennedy Award (2013)
SIAM/ACM Prize in Computational Science and Engineering (2019)
Foreign Member, Royal Society (2019)
IEEE Computer Pioneer Award (2020)
Background
Born on July 18, 1950, in Chicago to a Sicilian immigrant family, Dongarra struggled in school with undiagnosed dyslexia, but his love of tinkering with machines eventually drew him toward science.
He studied mathematics at Illinois State University, worked part‑time at a pizza shop to pay tuition, and later became a professor of computer science at the University of Tennessee and a researcher at Oak Ridge National Laboratory.
Linpack and Benchmarking
In the late 1970s at Argonne National Laboratory, Dongarra helped write Linpack, a library for dense linear‑algebra computations, such as solving systems of linear equations, on supercomputers. Linpack became a cornerstone for scientific applications such as weather forecasting, economic modeling, and nuclear simulations.
The Linpack benchmark, which times the solution of a dense system of linear equations and reports the result in FLOPS, grew out of the library; in 1993, Dongarra and collaborators used it to launch the TOP500 list, which ranks the world's fastest machines.
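The benchmark's core idea can be sketched in a few lines: time the solution of a dense system Ax = b and convert the classic Linpack operation count (⅔n³ + 2n²) into a FLOP rate. The snippet below is a minimal NumPy illustration of that idea, not the official benchmark code; the function name and problem size are arbitrary.

```python
import time
import numpy as np

def linpack_like_gflops(n=1000, seed=0):
    """Time the solution of a dense system Ax = b and report GFLOP/s,
    using the classic Linpack operation count 2/3*n^3 + 2*n^2."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    t0 = time.perf_counter()
    x = np.linalg.solve(A, b)          # LU factorization + triangular solves
    elapsed = time.perf_counter() - t0

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    # Scaled residual: a correctness check in the spirit of the benchmark
    resid = np.linalg.norm(A @ x - b) / (np.linalg.norm(A) * np.linalg.norm(x))
    return flops / elapsed / 1e9, resid

gflops, resid = linpack_like_gflops()
print(f"{gflops:.2f} GFLOP/s, scaled residual {resid:.2e}")
```

The real High‑Performance Linpack (HPL) used for TOP500 rankings distributes this same computation across thousands of nodes, but the measured quantity is the same: solved system, checked residual, operations per second.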
Key Technologies
Automatic Tuning (ATLAS): Dongarra and collaborators built software that automatically searches for near‑optimal algorithm parameters (such as cache‑blocking sizes) for linear‑algebra kernels, often matching or outperforming manually tuned code.
Mixed‑Precision Arithmetic: In a 2006 paper, he showed how fast 32‑bit floating‑point operations, combined with refinement of the result, can deliver 64‑bit accuracy, a technique now vital for machine‑learning performance.
Batch Processing: He pioneered the decomposition of large dense matrix computations into many smaller independent tasks that run in parallel, an approach widely used in simulation and data analysis.
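The mixed‑precision idea can be illustrated with a short sketch: factor and solve in fast 32‑bit arithmetic, then recover full 64‑bit accuracy through iterative refinement, where the residual is computed in 64‑bit and each correction comes from another cheap 32‑bit solve. This is a simplified NumPy illustration of the technique, not Dongarra's actual implementation; the function name and test matrix are made up for the example.

```python
import numpy as np

def solve_mixed_precision(A, b, iters=5):
    """Solve Ax = b mostly in float32, refining to float64 accuracy.

    The expensive work (the dense solves) runs in 32-bit; only the
    residual b - A@x is computed in 64-bit each iteration."""
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                                  # residual in float64
        dx = np.linalg.solve(A32, r.astype(np.float32))  # cheap 32-bit correction
        x += dx.astype(np.float64)
    return x

rng = np.random.default_rng(1)
n = 500
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
b = rng.standard_normal(n)
x = solve_mixed_precision(A, b)
```

For well‑conditioned systems the refined solution reaches the same accuracy as a pure 64‑bit solve, while the dominant O(n³) work ran at 32‑bit speed; the same pattern, with even lower precisions, now underlies much of machine‑learning arithmetic.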
He also contributed to MPI (Message Passing Interface) and PAPI (Performance API), standards that underpin modern high‑performance and scientific computing.
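The automatic‑tuning approach behind ATLAS can likewise be sketched as an empirical search: time the same kernel at several candidate cache‑block sizes and keep the fastest. The toy search below (hypothetical function names, NumPy for brevity) mimics the install‑time search ATLAS performs, though the real system explores far more parameters than a single block size.

```python
import time
import numpy as np

def blocked_matmul(A, B, bs):
    """Multiply A @ B in bs-by-bs blocks; which block size is fastest
    depends on the machine's cache hierarchy."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, bs):
        for k in range(0, n, bs):
            for j in range(0, n, bs):
                C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
    return C

def autotune_block_size(n=256, candidates=(16, 32, 64, 128)):
    """Empirically time each candidate block size and return the fastest,
    mirroring (in miniature) ATLAS's install-time parameter search."""
    rng = np.random.default_rng(2)
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    timings = {}
    for bs in candidates:
        t0 = time.perf_counter()
        blocked_matmul(A, B, bs)
        timings[bs] = time.perf_counter() - t0
    return min(timings, key=timings.get), timings
```

The design point is that the winning parameter is discovered by measurement on the target machine rather than predicted from a hardware model, which is why the generated code ports well across processors.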
Supercomputers and Cloud Computing
By the 2000s, the most powerful computers linked thousands of nodes into massive clusters. The rise of cloud services from Amazon, Google, and Microsoft (and in China, Baidu, Alibaba, Tencent) further connected small machines, creating new opportunities for AI training.
Dongarra believes cloud services represent the future of scientific computing, as custom chips from these providers will accelerate AI and other workloads, reducing reliance on traditional, monolithic supercomputers.
He also notes that emerging quantum computers could dramatically shift performance benchmarks.
21CTO
21CTO (21CTO.com) offers developers community, training, and services, making it your go‑to learning and service platform.
