What Is Computing Power and Why It Drives AI, Cloud, and Blockchain
This article explains the concept of computing power, its units of measurement, the classification into general-purpose and specialized types, the roles of CPU, GPU, FPGA, and ASIC chips, and how computing power underpins AI model training, blockchain mining, and scientific research.
What Is Computing Power
Computing power, literally "computing ability", refers to the capability of a computing system—such as a computer, server, or data center—to process information and perform calculations.
The human brain is itself a computing engine, constantly performing calculations, but human mental arithmetic is weak compared to a machine's. By analogy, a high-performance computer is like a high-school student who solves problems quickly, while a low-powered machine is like an elementary-school student who works more slowly.
Computing power is usually measured in FLOPS (floating-point operations per second); 1 TFLOPS is one trillion FLOPS, and 1 PFLOPS equals 1,000 TFLOPS. For example, training the 671B-parameter DeepSeek-V3-Base model consumed about 2.788 M GPU-hours. The number of GPUs a training run needs can be estimated with the formula:
Required GPUs = Total FLOP ÷ (Single‑GPU power × Training time × Efficiency factor)
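The formula can be sketched in a few lines of code; every number below is an illustrative assumption, not a measured spec for any particular GPU or model:

```python
def required_gpus(total_flop, gpu_flops, train_seconds, efficiency):
    """Estimate GPU count: total FLOP / (per-GPU FLOP/s * seconds * utilization)."""
    return total_flop / (gpu_flops * train_seconds * efficiency)

# Hypothetical run: 1e24 total FLOP, 1 PFLOPS peak per GPU,
# 30 days of training, 40% achieved utilization.
PFLOPS = 1e15
n = required_gpus(1e24, 1 * PFLOPS, 30 * 24 * 3600, 0.4)
print(round(n))  # → 965
```

The efficiency factor matters: real clusters rarely sustain peak FLOPS, so halving utilization doubles the GPUs (or the time) required.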
Classification of Computing Power
We typically divide computing power into two categories: general‑purpose and specialized.
General‑purpose computing power: versatile like a Swiss army knife, able to handle diverse and unpredictable tasks.
Specialized computing power: tailored like a kitchen whisk for specific types of calculations, offering high efficiency in its domain.
Chips that supply computing power likewise fall into general-purpose and specialized types. General-purpose chips, such as x86 CPUs, are flexible but consume more power per operation. The main specialized chips are FPGAs (field-programmable gate arrays) and ASICs (application-specific integrated circuits). An FPGA can be reprogrammed in the field to change its internal logic for a dedicated task, while an ASIC is a fixed-function chip that delivers the highest efficiency and lowest power consumption for its single task but cannot be repurposed.
Smart Computing and AI
Artificial intelligence is a major consumer of computing power; together with algorithms and data, compute is one of the three pillars of AI. AI workloads are dominated by massive matrix and vector operations, making GPUs and specialized chips the preferred hardware. GPUs, originally designed for graphics rendering, contain thousands of parallel cores that excel at compute-intensive, highly parallel tasks such as AI training.
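Why GPUs fit these workloads can be seen in the arithmetic of a single matrix multiply: every output element is an independent dot product, so thousands of cores can compute them at once. A minimal sketch with illustrative shapes:

```python
# Each output element of an (m x k) @ (k x n) matrix multiply is an
# independent dot product -- this independence is what GPUs exploit.
def matmul_flops(m, n, k):
    """FLOP count of the multiply: one multiply and one add per term."""
    return 2 * m * n * k

# A single square multiply at a size typical of large-model layers:
print(matmul_flops(4096, 4096, 4096))  # → 137438953472 (~137 GFLOP)
```

A single forward pass through a large model chains thousands of such multiplies, which is why training compute adds up so quickly.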
Because of the soaring demand for AI computation, many “intelligent computing” data centers have been built to host large‑scale GPU clusters.
Application Scenarios and Typical Cases
Large‑Model Training
Training large models requires massive datasets and models with billions of parameters; GPT-3, for instance, has 175 billion parameters. These models typically use the Transformer architecture, whose matrix-heavy attention and feed-forward layers map naturally onto the parallel computation capabilities of modern GPUs.
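A widely used rule of thumb estimates training compute as roughly 6 × parameters × tokens. Applying it to GPT-3's published figures (175 B parameters, about 300 B training tokens) gives a ballpark of the total FLOP involved; the token count is taken from the GPT-3 paper, and the rule itself is an approximation:

```python
# Rule of thumb: training compute C ≈ 6 * N * D FLOP,
# where N = parameter count and D = number of training tokens.
def training_flop(params, tokens):
    return 6 * params * tokens

c = training_flop(175e9, 300e9)   # GPT-3 scale
pflops_day = 1e15 * 86400         # one PFLOPS sustained for a day
print(f"{c:.2e} FLOP ≈ {c / pflops_day:.0f} PFLOPS-days")
# → 3.15e+23 FLOP ≈ 3646 PFLOPS-days
```

That order of magnitude, thousands of PFLOPS-days for a single training run, is why large-model training is feasible only on dedicated GPU clusters.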
Blockchain and Energy
By 2025, Bitcoin mining is projected to reach 831 EH/s (exahashes per second), with miners relocating to regions with ultra‑low electricity costs to achieve energy arbitrage.
Scientific Research and Industrial Manufacturing
China’s Tianhe supercomputers accelerate genomics-based disease research.
The Frontier supercomputer simulates nuclear reactor interiors to improve energy efficiency.
Conclusion
Computing power is a critical resource, yet challenges remain: utilization is low (distributed small-scale compute often runs at only 10–15% utilization) and distribution is uneven. Moore’s law has been slowing since around 2015, so growth in compute efficiency lags behind the growth of data, highlighting the need for better resource scheduling and network support to meet future computing demands.
Efficient Ops
This public account is maintained by Xiaotianguo and friends, and regularly publishes widely read original technical articles. We focus on operations transformation and will accompany you throughout your operations career, growing together along the way.
