DCoPilot: Generative AI-Empowered Policy Adaptation for Dynamic Data Center Operations

📅 2026-02-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of maintaining service-level agreement (SLA) compliance in dynamic data centers, where high power density and rapidly shifting workloads prevent traditional reinforcement learning approaches from adapting promptly, often leading to service disruptions. To overcome this, the authors propose DCoPilot, a novel framework that leverages large language models (LLMs) to symbolically generate structured reward functions and integrates hypernetworks to enable zero-shot, minute-scale synthesis of adaptive control policies. By combining deep reinforcement learning, meta-policy distillation, and online adaptation mechanisms, DCoPilot achieves near-zero constraint violations across five SimReady control tasks, substantially outperforming existing baselines. Ablation studies further confirm that LLM-generated rewards play a critical role in ensuring stable and efficient convergence of the hypernetwork.
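To make the "symbolic generation of structured reward functions" concrete, here is a hypothetical example of the kind of reward form an LLM might emit for a cooling-control task: a weighted sum of an energy cost and an SLA-violation penalty. The function name, weights, and temperature threshold are illustrative assumptions, not values from the paper.

```python
def reward(power_kw: float, inlet_temp_c: float,
           sla_temp_limit_c: float = 27.0,
           w_energy: float = 0.1, w_violation: float = 10.0) -> float:
    """Structured reward sketch: penalize energy use and SLA breaches.

    All weights and the 27 C inlet-temperature limit are assumed values
    for illustration; the paper's actual reward forms are LLM-generated.
    """
    energy_cost = w_energy * power_kw
    # Penalty grows linearly once the inlet temperature exceeds the SLA limit.
    violation = max(0.0, inlet_temp_c - sla_temp_limit_c)
    return -energy_cost - w_violation * violation

print(reward(power_kw=50.0, inlet_temp_c=26.0))  # -5.0  (no violation)
print(reward(power_kw=50.0, inlet_temp_c=28.5))  # -20.0 (violation penalized)
```

A structured form like this, with named terms and tunable weights, is easy to stress-test across simulation scenes and keeps the reward signal interpretable.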

📝 Abstract
Modern data centers (DCs) hosting artificial intelligence (AI)-dedicated devices operate at high power densities with rapidly varying workloads, making minute-level adaptation essential for safe and energy-efficient operation. However, manually designing piecewise deep reinforcement learning (DRL) agents cannot keep pace with frequent dynamics shifts and service-level agreement (SLA) changes of an evolving DC. This specification-to-policy lag causes a lack of timely, effective control policies, which may lead to service outages. To bridge the gap, we present DCoPilot, a hybrid framework for generative control policies in dynamic DC operation. DCoPilot synergizes two distinct generative paradigms, i.e., a large language model (LLM) that performs symbolic generation of structured reward forms, and a hypernetwork that conducts parametric generation of policy weights. DCoPilot operates through three coordinated phases: (i) simulation scale-up, which stress-tests reward candidates across diverse simulation-ready (SimReady) scenes; (ii) meta policy distillation, where a hypernetwork is trained to output policy weights conditioned on SLA and scene embeddings; and (iii) online adaptation, enabling zero-shot policy generation in response to updated specifications. Evaluated across five control task families spanning diverse DC components, DCoPilot achieves near-zero constraint violations and outperforms all baselines across specification variations. Ablation studies validate the effectiveness of LLM-based unified reward generation in enabling stable hypernetwork convergence.
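The abstract's second generative paradigm, a hypernetwork that outputs policy weights conditioned on SLA and scene embeddings, can be sketched minimally as follows. All dimensions, the single-linear-layer policy, and the untrained random hypernetwork weights are assumptions for illustration; the paper's actual architecture is not specified here.

```python
import numpy as np

SLA_DIM, SCENE_DIM = 4, 8   # conditioning embedding sizes (assumed)
OBS_DIM, ACT_DIM = 16, 3    # target policy input/output sizes (assumed)
HID = 32                    # hypernetwork hidden width (assumed)

rng = np.random.default_rng(0)

# Hypernetwork parameters: map [sla; scene] -> flattened policy weights.
# In the paper these would be trained during meta-policy distillation;
# here they are random placeholders.
n_policy_params = OBS_DIM * ACT_DIM + ACT_DIM  # one linear policy layer
W1 = rng.normal(0, 0.1, (SLA_DIM + SCENE_DIM, HID))
W2 = rng.normal(0, 0.1, (HID, n_policy_params))

def generate_policy(sla_emb, scene_emb):
    """Zero-shot: produce a policy for a new specification, no retraining."""
    cond = np.concatenate([sla_emb, scene_emb])
    h = np.tanh(cond @ W1)
    flat = h @ W2
    W_pi = flat[: OBS_DIM * ACT_DIM].reshape(OBS_DIM, ACT_DIM)
    b_pi = flat[OBS_DIM * ACT_DIM:]
    # The generated policy is a closure over its freshly emitted weights.
    return lambda obs: np.tanh(obs @ W_pi + b_pi)

# Usage: an updated SLA arrives -> a new policy is generated in one forward pass.
policy = generate_policy(rng.normal(size=SLA_DIM), rng.normal(size=SCENE_DIM))
action = policy(rng.normal(size=OBS_DIM))
print(action.shape)  # (3,)
```

The key design point is that adaptation cost collapses to a single forward pass of the hypernetwork, which is what makes minute-scale, zero-shot policy generation plausible.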
Problem

Research questions and friction points this paper is trying to address.

data center
dynamic workload
service-level agreement
policy adaptation
control policy
Innovation

Methods, ideas, or system contributions that make the work stand out.

generative AI
large language model (LLM)
hypernetwork
policy adaptation
data center control