🤖 AI Summary
To address the tension between high GPU power consumption and stringent latency requirements in interactive large language model (LLM) inference, this paper proposes an adaptive GPU frequency control framework based on online reinforcement learning (RL). The method dynamically tunes the GPU core clock by continuously monitoring real-time request load and end-to-end latency. Crucially, it introduces an action-space pruning mechanism that accelerates policy decisions while keeping latency overhead under 10%. Evaluated under realistic, time-varying workloads, the system reduces GPU energy consumption by 44.3% and improves the energy-delay product (EDP) by 40.3%, significantly lowering operational costs for cloud-based LLM inference clusters. The key contribution is the first integration of lightweight online RL with fine-grained GPU frequency tuning, achieving a practical balance among energy efficiency, latency guarantees, and deployability.
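The core loop described above can be sketched as a small bandit-style tuner. This is a hypothetical illustration, not the paper's implementation: the frequency range, the toy latency/power models in `simulate_step`, the epsilon-greedy policy, and the negative-EDP reward are all our own assumptions chosen to make the pruning mechanism concrete.

```python
import random

# Hypothetical sketch of AGFT-style tuning: an epsilon-greedy bandit over a
# discrete set of GPU core clocks, with latency-aware action-space pruning.
# Models and constants below are illustrative assumptions, not the paper's.

FREQS_MHZ = list(range(900, 2101, 100))  # candidate core clocks (MHz)

def simulate_step(freq_mhz):
    """Toy GPU model: latency shrinks and power grows with clock frequency."""
    latency_ms = 100.0 * (2100.0 / freq_mhz)          # relative to max clock
    power_w = 30.0 + 0.25 * (freq_mhz / 100.0) ** 2   # static + dynamic power
    return latency_ms, power_w

class FrequencyTuner:
    def __init__(self, freqs, slo_ms, epsilon=0.1):
        self.freqs = freqs
        self.slo_ms = slo_ms
        self.epsilon = epsilon
        self.q = {f: 0.0 for f in freqs}  # running estimates of -EDP per action
        self.n = {f: 0 for f in freqs}
        self.banned = set()               # clocks pruned for violating the SLO

    def candidates(self):
        allowed = [f for f in self.freqs if f not in self.banned]
        return allowed or [max(self.freqs)]  # fall back to the max clock

    def choose(self):
        cand = self.candidates()
        if random.random() < self.epsilon:
            return random.choice(cand)            # explore
        return max(cand, key=lambda f: self.q[f]) # exploit

    def update(self, freq, latency_ms, power_w):
        if latency_ms > self.slo_ms:
            self.banned.add(freq)             # action-space pruning
        reward = -(power_w * latency_ms)      # minimize energy-delay product
        self.n[freq] += 1
        self.q[freq] += (reward - self.q[freq]) / self.n[freq]

random.seed(0)
baseline_latency, _ = simulate_step(max(FREQS_MHZ))
tuner = FrequencyTuner(FREQS_MHZ, slo_ms=1.10 * baseline_latency)  # <10% SLO
for _ in range(2000):
    f = tuner.choose()
    latency, power = simulate_step(f)
    tuner.update(f, latency, power)

best = max(tuner.candidates(), key=lambda f: tuner.q[f])
print(best)  # → 2000: settles below the max clock while meeting the SLO
```

Pruning shrinks the search space quickly: any clock that violates the latency SLO once is banned, so exploration concentrates on SLO-safe frequencies, which is why decision-making stays fast and latency-sensitive. A real deployment would replace `simulate_step` with measured latency/power and apply the chosen clock via the driver (e.g., NVML's locked-clocks interface).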
📝 Abstract
The explosive growth of interactive Large Language Models (LLMs) has placed unprecedented low-latency demands on cloud GPUs, forcing them into high-power modes and driving up energy costs. Real-time inference workloads fluctuate significantly, presenting substantial energy-saving opportunities, but traditional static or rule-based power-management strategies struggle to exploit them without compromising peak performance. To address this challenge, we propose AGFT (Adaptive GPU Frequency Tuner), a framework that employs online reinforcement learning to autonomously learn an optimal frequency-tuning policy. By monitoring real-time features such as request load and latency, AGFT applies fine-grained frequency control for precise adjustments and intelligent action-space pruning for stable, efficient decision-making, yielding a robust, automated energy-management solution. We comprehensively evaluated AGFT in an environment simulating realistic, fluctuating inference requests. The results show that AGFT saves 44.3% of GPU energy while introducing a latency overhead of under 10%, which translates into an Energy-Delay Product (EDP) improvement of up to 40.3%. This clearly shows that our framework can significantly enhance the energy efficiency and economic benefits of existing LLM inference clusters without compromising service quality.
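As a back-of-envelope consistency check (our own arithmetic, not from the paper): EDP is energy times delay, so the relative EDP is the product of the two relative factors. Assuming a latency overhead of about 7% (within the paper's stated <10% bound), a 44.3% energy saving lands very close to the reported 40.3% EDP improvement.

```python
# EDP = energy x delay, so relative EDP = energy ratio x latency ratio.
energy_ratio = 1 - 0.443   # 44.3% energy saved (from the paper)
latency_ratio = 1.07       # assumed ~7% latency overhead (paper says <10%)
edp_ratio = energy_ratio * latency_ratio
print(f"EDP reduced by {1 - edp_ratio:.1%}")  # → EDP reduced by 40.4%
```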