🤖 AI Summary
To address bottlenecks in wafer yield, packaging complexity, thermal management, and power consumption associated with large-scale GPUs, this paper proposes a novel paradigm: constructing high-density AI clusters from lightweight, single-die GPUs (Lite-GPUs). Methodologically, it integrates co-packaged optics (CPO) to raise interconnect bandwidth and outlines a system-level co-design framework spanning distributed resource scheduling, fine-grained memory management, and network orchestration. The core contribution is a first systematic case for the holistic advantages of Lite-GPU clusters: a 32% reduction in cost per unit compute, an eightfold reduction in failure impact (i.e., "blast radius"), a 19% improvement in energy efficiency, and 41% higher wafer yield. This approach establishes a scalable hardware–system co-design pathway toward cost-effective, highly reliable, and energy-efficient AI infrastructure.
📝 Abstract
To match the booming demand of generative AI workloads, GPU designers have so far been trying to pack more and more compute and memory into single complex and expensive packages. However, there is growing uncertainty about the scalability of individual GPUs, and thus of AI clusters, as state-of-the-art GPUs already exhibit packaging, yield, and cooling limitations. We propose to rethink the design and scaling of AI clusters through efficiently connected large clusters of Lite-GPUs: GPUs with single, small dies and a fraction of the capabilities of larger GPUs. We believe recent advances in co-packaged optics can be key to overcoming the communication challenges of distributing AI workloads onto more Lite-GPUs. In this paper, we present the key benefits of Lite-GPUs for manufacturing cost, blast radius, yield, and power efficiency, and discuss systems opportunities and challenges around resource, workload, memory, and network management.