HyperLoad: A Cross-Modality Enhanced Large Language Model-Based Framework for Green Data Center Cooling Load Prediction

📅 2025-12-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of few-shot learning, cold-start conditions, fragmented multi-source data, and distributional shifts in green data center cooling load forecasting, this paper proposes the first large language model (LLM)-driven cross-modal load forecasting paradigm. Methodologically, it integrates cross-modal embedding alignment, adaptive prefix-tuning that injects domain-specific prior knowledge, and an enhanced global interaction attention mechanism that models inter-device temporal dependencies, enabling joint modeling of time-series sensor data and operational text. The approach consistently outperforms state-of-the-art methods under both data-sufficient and data-scarce scenarios. The authors release DCData, the first open benchmark dataset for data center cooling load forecasting. Empirical evaluation demonstrates significant improvements in prediction accuracy, directly supporting PUE optimization and precise carbon intensity management.

📝 Abstract
The rapid growth of artificial intelligence is exponentially escalating computational demand, inflating data center energy use and carbon emissions, and spurring rapid deployment of green data centers to relieve resource and environmental stress. Achieving sub-minute orchestration of renewables, storage, and loads, while minimizing PUE and lifecycle carbon intensity, hinges on accurate load forecasting. However, existing methods struggle to address small-sample scenarios caused by cold starts, load distortion, multi-source data fragmentation, and distribution shifts in green data centers. We introduce HyperLoad, a cross-modality framework that exploits pre-trained large language models (LLMs) to overcome data scarcity. In the Cross-Modality Knowledge Alignment phase, textual priors and time-series data are mapped to a common latent space, maximizing the utility of prior knowledge. In the Multi-Scale Feature Modeling phase, domain-aligned priors are injected through adaptive prefix-tuning, enabling rapid scenario adaptation, while an Enhanced Global Interaction Attention mechanism captures cross-device temporal dependencies. The public DCData dataset is released for benchmarking. Under both data-sufficient and data-scarce settings, HyperLoad consistently surpasses state-of-the-art (SOTA) baselines, demonstrating its practicality for sustainable green data center management.
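The abstract's prefix-tuning step, prepending learned prior vectors to a frozen attention layer, can be illustrated with a toy sketch. This is not the paper's implementation: the attention function, dimensions, and all numeric values below are placeholders standing in for HyperLoad's learned domain priors and frozen LLM backbone.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Single-query scaled dot-product attention over lists of key/value vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    out = [0.0] * len(values[0])
    for w, v in zip(weights, values):
        out = [o + w * x for o, x in zip(out, v)]
    return out

# Frozen backbone context: two time-series token embeddings (2-dim, toy values).
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[0.5, 0.5], [0.2, 0.8]]

# Trainable prefix key/value pair (placeholder for learned domain priors);
# it is prepended to the frozen keys/values, so no backbone weights change.
prefix_k = [[0.8, 0.2]]
prefix_v = [[0.9, 0.1]]

query = [1.0, 0.5]
plain = attention(query, keys, values)
prefixed = attention(query, prefix_k + keys, prefix_v + values)
print(plain, prefixed)  # the prefix shifts the output toward the injected prior
```

Because the prefix only extends the key/value lists, the backbone parameters stay frozen; only the small prefix would be updated during adaptation, which is what makes this family of methods attractive in the small-sample settings the paper targets.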
Problem

Research questions and friction points this paper is trying to address.

Accurately forecasting cooling load in green data centers
Data scarcity caused by cold starts and distribution shifts
Adapting cross-modality LLMs to small-sample scenarios
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs for cross-modality knowledge alignment in latent space
Adaptive prefix-tuning for rapid scenario adaptation
Enhanced attention mechanism capturing cross-device dependencies
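The alignment idea above, projecting textual priors and time-series features into one shared latent space, can be sketched minimally. All projection matrices and embeddings below are made-up placeholders; in HyperLoad these mappings are learned during the Cross-Modality Knowledge Alignment phase.

```python
import math

def project(vec, weights):
    """Linear projection: weights is a list of rows (latent_dim x input_dim)."""
    return [sum(w * x for w, x in zip(row, vec)) for row in weights]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy inputs: a 4-dim "text prior" embedding and a 3-dim sensor-window embedding.
text_emb = [0.2, 0.4, 0.1, 0.3]
ts_emb = [1.0, 0.5, 0.2]

# Placeholder projections into a 2-dim shared latent space (would be learned).
W_text = [[0.5, 0.1, 0.0, 0.2], [0.1, 0.3, 0.4, 0.0]]
W_ts = [[0.6, 0.2, 0.1], [0.0, 0.5, 0.3]]

z_text = project(text_emb, W_text)
z_ts = project(ts_emb, W_ts)
alignment = cosine(z_text, z_ts)  # alignment score in [-1, 1]
print(z_text, z_ts, round(alignment, 3))
```

Once both modalities live in the same space, a similarity or contrastive objective can pull matching text/time-series pairs together, which is one common way such alignment losses are formulated.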