🤖 AI Summary
AI data centers face escalating challenges—including rapid model iteration, surging resource demands, and pronounced hardware heterogeneity—leading to high total cost of ownership (TCO) under conventional siloed lifecycle management. This paper proposes the first holistic, workload-aware lifecycle co-optimization framework tailored for large language model (LLM) workloads, unifying infrastructure provisioning, hardware refresh, and runtime operation. It jointly models workload dynamics, hardware technology evolution, and system aging effects. Key innovations include: (1) integrated power-cooling-network configuration design; (2) a hardware-trend-aligned, phased refresh strategy; and (3) runtime software-hardware co-optimization. Experimental evaluation demonstrates up to 40% TCO reduction compared to state-of-the-art segmented approaches. The framework establishes a practical, system-level optimization paradigm and actionable guidelines for sustainable AI infrastructure evolution.
📝 Abstract
The rapid rise of large language models (LLMs) has driven enormous demand for AI inference infrastructure, powered mainly by high-end GPUs. While these accelerators offer immense computational power, they incur high capital and operational costs due to frequent upgrades, dense power consumption, and cooling demands, making total cost of ownership (TCO) for AI datacenters a critical concern for cloud providers. Unfortunately, traditional datacenter lifecycle management, designed for general-purpose workloads, struggles to keep pace with AI's fast-evolving models, rising resource needs, and diverse hardware profiles. In this paper, we rethink the AI datacenter lifecycle across three stages: building, hardware refresh, and operation. We show how design choices in power, cooling, and networking provisioning affect long-term TCO. We also explore refresh strategies aligned with hardware trends. Finally, we use operation-time software optimizations to reduce cost. While optimizations at each individual stage yield benefits, unlocking the full potential requires rethinking the entire lifecycle. Thus, we present a holistic lifecycle management framework that coordinates and co-optimizes decisions across all three stages, accounting for workload dynamics, hardware evolution, and system aging. Our system reduces TCO by up to 40% over traditional approaches. Using our framework, we provide guidelines on how to manage the AI datacenter lifecycle going forward.
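To make the lifecycle trade-off concrete, here is a toy TCO sketch, not the paper's actual framework: it compares a one-time build against a phased, hardware-trend-aligned refresh under growing demand. All cost, performance, and power figures are made up for illustration; `GpuGen`, `tco`, and the refresh plans are hypothetical names introduced here.

```python
from dataclasses import dataclass

@dataclass
class GpuGen:
    """One hypothetical accelerator generation (all figures illustrative)."""
    capex_per_rack: float  # purchase cost of one rack, arbitrary units
    perf: float            # relative throughput per rack, normalized
    power_kw: float        # power draw per rack

def tco(plan, years, demand_growth=1.5, energy_cost=1.0):
    """Total cost to serve demand growing `demand_growth`x per year.

    `plan` maps year -> GpuGen deployed from that year on; each year we buy
    just enough racks of the newest deployed generation to close the gap
    between demand and installed capacity, then pay energy for the whole fleet.
    """
    total, demand = 0.0, 1.0
    fleet = []  # list of (racks, generation) currently installed
    current = None
    for year in range(years):
        if year in plan:
            current = plan[year]  # refresh point: switch to a newer generation
        capacity = sum(racks * gen.perf for racks, gen in fleet)
        gap = max(0.0, demand - capacity)
        racks = gap / current.perf
        total += racks * current.capex_per_rack                   # capex
        fleet.append((racks, current))
        total += energy_cost * sum(r * g.power_kw for r, g in fleet)  # opex
        demand *= demand_growth
    return total

# Illustrative generations: gen_b is newer, pricier, but denser and faster.
gen_a = GpuGen(capex_per_rack=10.0, perf=1.0, power_kw=10.0)
gen_b = GpuGen(capex_per_rack=12.0, perf=2.5, power_kw=12.0)

build_once = {0: gen_a}            # provision once, never refresh
phased = {0: gen_a, 2: gen_b}      # trend-aligned mid-lifecycle refresh

print(tco(build_once, years=4), tco(phased, years=4))
# with these made-up numbers, the phased refresh comes out cheaper
```

Even this crude model shows why the refresh schedule cannot be chosen in isolation: the break-even point depends on demand growth, energy price, and the pace of hardware improvement, which is exactly the coupling the paper's co-optimization framework captures.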