Coordinated Cooling and Compute Management for AI Datacenters

📅 2026-01-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the energy inefficiency of AI data centers serving large language model inference, which is often exacerbated by inadequate thermal management. The authors propose the first hierarchical framework that jointly optimizes computation scheduling and cooling control. By co-modeling GPU computational load and thermal dynamics, the framework coordinates parallelism, dynamic voltage and frequency scaling (DVFS), and cooling strategies to maximize energy efficiency while satisfying latency and thermal constraints. Simulations based on real-world Azure inference traces demonstrate that the approach effectively balances performance, energy consumption, and carbon emissions, overcoming the limitations of conventional methods that focus solely on computational optimization.

📝 Abstract
AI datacenters are currently being deployed at large scale to support the training and deployment of power-intensive large language models (LLMs). The extensive computation and cooling required in datacenters raise concerns about the energy use and carbon emissions of AI datacenters. Although the current state of the art has examined the energy efficiency of LLM inference, most prior research focuses on optimizing compute-side scheduling without considering thermal objectives or constraints. Since GPU-intensive inference generates substantial heat that can degrade datacenter performance, ignoring thermal effects can increase total energy consumption and reduce the efficiency of LLM serving. To fill this gap, we profile the characteristics of GPU servers under varying cooling conditions and AI jobs, and develop a joint cooling and computing modeling approach for AI datacenters. Built upon these workload and thermal-dynamics models, a novel hierarchical control framework is proposed to co-optimize computing and thermal management by identifying the optimal GPU parallelism, frequency (DVFS), and cooling control knobs. Using real Azure inference traces and detailed GPU profiling, our model balances serving latency and thermal constraints in AI datacenters while significantly improving their energy efficiency.
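To make the co-optimization idea concrete, here is a minimal toy sketch (not the authors' actual formulation) of jointly choosing a GPU frequency (DVFS) and a cooling knob: a grid search minimizes total power subject to a latency SLO and a GPU temperature limit. All model coefficients, the fan-level knob, and the function names are illustrative assumptions, not values from the paper.

```python
from itertools import product

# Illustrative toy models; all coefficients are assumptions, not paper data.
def gpu_power_w(freq_ghz):
    return 120.0 * freq_ghz ** 3           # cubic power-vs-frequency model

def fan_power_w(fan_level):
    return 40.0 * fan_level ** 3           # fan affinity law: power ~ speed^3

def latency_ms(freq_ghz):
    return 80.0 / freq_ghz                 # latency inversely proportional to clock

def gpu_temp_c(freq_ghz, fan_level, inlet_c=25.0):
    r_th = 0.25 / fan_level                # thermal resistance drops with airflow
    return inlet_c + gpu_power_w(freq_ghz) * r_th

def co_optimize(latency_slo_ms=60.0, temp_limit_c=85.0):
    """Grid-search the (frequency, fan) pair with minimal total power
    that meets both the latency SLO and the thermal constraint."""
    freqs = [round(0.8 + 0.1 * i, 1) for i in range(11)]     # 0.8 .. 1.8 GHz
    fans = [round(0.5 + 0.25 * j, 2) for j in range(7)]      # 0.5 .. 2.0
    best = None
    for f, fan in product(freqs, fans):
        if latency_ms(f) > latency_slo_ms or gpu_temp_c(f, fan) > temp_limit_c:
            continue                       # infeasible operating point
        p_total = gpu_power_w(f) + fan_power_w(fan)
        if best is None or p_total < best[0]:
            best = (p_total, f, fan)
    return best                            # (total watts, freq GHz, fan level)
```

The sketch illustrates the coupling the paper exploits: raising the clock meets latency more easily but forces more cooling power, so the compute and cooling knobs must be searched jointly rather than tuned independently. The paper's hierarchical framework additionally optimizes GPU parallelism and uses profiled, trace-driven models in place of these toy equations.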
Problem

Research questions and friction points this paper is trying to address.

AI datacenters
energy efficiency
thermal management
large language models
cooling
Innovation

Methods, ideas, or system contributions that make the work stand out.

coordinated cooling
compute-thermal co-optimization
hierarchical control
GPU thermal modeling
energy-efficient AI datacenters
Nardos Belay Abera
Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB, Canada
Yize Chen
Assistant Professor, University of Alberta
Machine Learning · Power Systems · Optimization · Control