AGFT: An Adaptive GPU Frequency Tuner for Real-Time LLM Inference Optimization

📅 2025-08-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the tension between high GPU power consumption and stringent latency requirements in interactive large language model (LLM) inference, this paper proposes an adaptive GPU frequency control framework based on online reinforcement learning (RL). The method dynamically optimizes the GPU core clock frequency by continuously monitoring real-time request load and end-to-end latency. Crucially, it introduces an action-space pruning mechanism that accelerates policy decisions while remaining latency-sensitive, keeping latency overhead under 10%. Evaluated under realistic, time-varying workloads, the system reduces GPU energy consumption by 44.3% and improves overall energy efficiency (Energy-Delay Product) by up to 40.3%, significantly lowering operational costs for cloud-based LLM inference clusters. The key contribution is the first integration of lightweight online RL with fine-grained GPU frequency tuning, achieving a practical balance among energy efficiency, latency guarantees, and deployability.

📝 Abstract
The explosive growth of interactive Large Language Models (LLMs) has placed unprecedented demands for low latency on cloud GPUs, forcing them into high-power modes and causing escalating energy costs. Real-time inference workloads exhibit significant dynamic volatility, presenting substantial energy-saving opportunities. However, traditional static or rule-based power management strategies struggle to exploit these opportunities without compromising peak performance. To address this challenge, we propose AGFT (An Adaptive GPU Frequency Tuner), a framework that employs online reinforcement learning to autonomously learn an optimal frequency tuning policy. By monitoring real-time features like request load and latency, AGFT utilizes fine-grained frequency control for precise adjustments and intelligent action space pruning for stable, efficient decision-making. This creates a robust, automated energy management solution. We comprehensively evaluated AGFT in an environment simulating realistic, fluctuating inference requests. The experimental results demonstrate that AGFT successfully saves 44.3% of GPU energy consumption while introducing a minimal performance latency overhead of under 10%. This achievement translates into a comprehensive Energy-Delay Product (EDP) optimization of up to 40.3%, clearly showing that our framework can significantly enhance the energy efficiency and economic benefits of existing LLM inference clusters without compromising service quality.
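The three headline numbers in the abstract are mutually consistent if we assume EDP is simply energy multiplied by delay, with both figures relative to the uncapped-frequency baseline (an assumption on our part; the paper's exact baseline is not stated here). A quick sanity check:

```python
def edp_improvement(energy_saving, latency_overhead):
    """Fractional EDP reduction given a fractional energy saving and a
    fractional latency increase, assuming EDP = energy x delay."""
    return 1.0 - (1.0 - energy_saving) * (1.0 + latency_overhead)

# 44.3% energy saving with ~7% latency overhead yields ~40.4% EDP gain,
# matching the reported "up to 40.3%"; even at the stated <10% worst-case
# overhead, the EDP gain is still ~38.7%.
print(round(edp_improvement(0.443, 0.07), 3))  # 0.404
print(round(edp_improvement(0.443, 0.10), 3))  # 0.387
```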
Problem

Research questions and friction points this paper is trying to address.

Optimize GPU energy use in real-time LLM inference
Reduce latency and power costs dynamically
Adaptive frequency tuning for energy-efficient AI clusters
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online reinforcement learning for GPU frequency tuning
Fine-grained frequency control for precise adjustments
Intelligent action space pruning for stable decisions
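The three innovations above can be sketched as a single minimal online learner. Everything in this sketch is an illustrative assumption on our part, not AGFT's actual formulation: the epsilon-greedy update, the load-proportional pruning rule, and the reward shape are stand-ins for whatever RL algorithm, pruning heuristic, and reward the paper uses.

```python
import random

class FrequencyTuner:
    """Toy epsilon-greedy bandit over a pruned set of GPU core
    frequencies (MHz). Illustrative only; AGFT's real state features,
    pruning rule, and reward are defined in the paper, not here."""

    def __init__(self, frequencies, latency_slo_ms, epsilon=0.1):
        self.freqs = sorted(frequencies)
        self.slo = latency_slo_ms
        self.eps = epsilon
        self.q = {f: 0.0 for f in self.freqs}  # running reward estimates
        self.n = {f: 0 for f in self.freqs}    # per-action visit counts

    def prune(self, load):
        # Action-space pruning: as normalized load (0..1) rises, drop the
        # lowest frequencies, which would almost surely violate the SLO.
        k = int(load * (len(self.freqs) - 1))
        return self.freqs[k:]

    def select(self, load):
        # Epsilon-greedy choice restricted to the pruned action set.
        candidates = self.prune(load)
        if random.random() < self.eps:
            return random.choice(candidates)
        return max(candidates, key=lambda f: self.q[f])

    def update(self, freq, energy_j, latency_ms):
        # Reward favors low energy but heavily penalizes SLO violations.
        reward = -energy_j - (100.0 if latency_ms > self.slo else 0.0)
        self.n[freq] += 1
        self.q[freq] += (reward - self.q[freq]) / self.n[freq]
```

In use, the loop would select a frequency each control interval, apply it (e.g. via an NVML-style clock-setting call), observe measured energy and latency, and call `update`; the pruning step is what keeps decision-making fast and latency-safe under load spikes.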
Zicong Ye
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China
Kunming Zhang
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China
Guoming Tang
The Hong Kong University of Science and Technology (Guangzhou)
Sustainable Computing / AI Cloud / Edge Computing / AI4Sus