🤖 AI Summary
A systematic, longitudinal analysis of AI supercomputing infrastructure has been lacking. Method: We construct an authoritative dataset covering 500 global AI supercomputers deployed between 2019 and 2025, enabling multidimensional temporal analysis of performance (FLOP/s), power consumption, acquisition cost, ownership, and geographic distribution. Contribution/Results: We empirically reveal, for the first time, an exponential growth pattern: AI compute doubles every nine months, while hardware cost and power draw double annually. We quantify a historic shift in deployment leadership from research institutions to industry. Geographically, the U.S. accounts for 75% of global AI supercomputing capacity, China for 15%. Extrapolating current trends, we project that by 2030, the peak performance of a single AI supercomputer will reach 2×10²² FLOP/s, with power demand of about 9 GW. These findings provide empirical foundations for evidence-based policymaking on AI infrastructure development and integrated energy planning.
📝 Abstract
Frontier AI development relies on powerful AI supercomputers, yet analysis of these systems is limited. We create a dataset of 500 AI supercomputers from 2019 to 2025 and analyze key trends in performance, power needs, hardware cost, ownership, and global distribution. We find that the computational performance of AI supercomputers has doubled every nine months, while hardware acquisition cost and power needs both doubled every year. The leading system in March 2025, xAI's Colossus, used 200,000 AI chips, had a hardware cost of $7B, and required 300 MW of power, as much as 250,000 households. As AI supercomputers evolved from tools for science to industrial machines, companies rapidly expanded their share of total AI supercomputer performance, while the share of governments and academia diminished. Globally, the United States accounts for about 75% of total performance in our dataset, with China in second place at 15%. If the observed trends continue, the leading AI supercomputer in 2030 will achieve $2\times10^{22}$ 16-bit FLOP/s, use two million AI chips, have a hardware cost of $200 billion, and require 9 GW of power. Our analysis provides visibility into the AI supercomputer landscape, allowing policymakers to assess key AI trends like resource needs, ownership, and national competitiveness.
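The 2030 projections follow directly from the stated doubling times applied to the March 2025 leader. A minimal sketch of that extrapolation, using the Colossus figures from the abstract (300 MW, $7B) and an *assumed* baseline performance of roughly 2×10²⁰ FLOP/s (not stated in the text; chosen here only to illustrate how the doubling arithmetic reaches the projected 2×10²² FLOP/s):

```python
# Extrapolate the reported trend lines from March 2025 to March 2030 (60 months).
# Performance doubles every 9 months; power and cost double every 12 months.

def project(baseline, doubling_months, months_ahead):
    """Exponential growth: the value doubles every `doubling_months` months."""
    return baseline * 2 ** (months_ahead / doubling_months)

MONTHS = 60  # March 2025 -> March 2030

perf_2030 = project(2e20, 9, MONTHS)   # FLOP/s (baseline is an assumption)
power_2030 = project(300, 12, MONTHS)  # MW, from Colossus's 300 MW
cost_2030 = project(7e9, 12, MONTHS)   # USD, from Colossus's $7B

print(f"performance ~{perf_2030:.1e} FLOP/s")  # ~2.0e+22
print(f"power ~{power_2030 / 1000:.1f} GW")    # ~9.6 GW
print(f"cost ~${cost_2030 / 1e9:.0f}B")        # ~$224B
```

Five annual doublings give a 32× factor, so 300 MW and $7B land near the abstract's rounded figures of 9 GW and $200 billion.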