🤖 AI Summary
Existing GPU power managers (e.g., NVIDIA’s default policy) ignore the distinct computational characteristics of the prefill and decode phases in LLM inference, leading to voltage/frequency misconfiguration, head-of-line blocking, and suboptimal energy efficiency. This paper proposes GreenLLM, an SLO-aware dynamic frequency scaling framework. First, it partitions the request queue by input length to eliminate head-of-line blocking. Second, it applies phase-specific optimization: static short-trace modeling for the compute-intensive prefill phase, and a lightweight dual-loop feedback controller for the latency-sensitive decode phase. Additionally, it introduces SM-level latency-power modeling, queue-aware scheduling, and a hysteresis-based fine-grained frequency adjustment mechanism. Evaluated on real-world traces from Alibaba Cloud and Azure, GreenLLM achieves up to 34% energy savings with no throughput degradation, while increasing the SLO violation rate by less than 3.5%.
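The length-based queue partitioning described above can be sketched as follows. The class boundaries, function names, and queue layout are illustrative assumptions for this sketch, not values taken from the paper.

```python
from collections import deque

# Hypothetical prompt-length class boundaries (in tokens); the paper's
# actual partitioning is not specified here, so these are illustrative.
CLASS_BOUNDS = [128, 512, 2048]

def make_queues():
    # One FIFO per length class, plus a final queue for the longest prompts.
    return [deque() for _ in range(len(CLASS_BOUNDS) + 1)]

def route(queues, request_id, prompt_len):
    """Place a request into the queue for its prompt-length class,
    so short prompts never wait behind long ones."""
    for i, bound in enumerate(CLASS_BOUNDS):
        if prompt_len <= bound:
            queues[i].append(request_id)
            return i
    queues[-1].append(request_id)
    return len(CLASS_BOUNDS)
```

Because each class is drained independently, a 100-token prompt arriving behind a 4,000-token prompt lands in a different queue and is not blocked by it.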
📝 Abstract
Large Language Models (LLMs) are becoming the backbone of modern cloud services, yet their inference costs are dominated by GPU energy. Unlike traditional GPU workloads, LLM inference has two stages with different characteristics: the prefill phase, which is latency-sensitive and whose compute scales quadratically with prompt length, and the decode phase, which progresses token by token for an unpredictable number of steps. Current GPU power governors (for example, NVIDIA's default) overlook this asymmetry and treat both stages uniformly. The result is mismatched voltage and frequency settings, head-of-line blocking, and excessive energy use.
We introduce GreenLLM, an SLO-aware serving framework that minimizes GPU energy by explicitly separating prefill and decode control. At ingress, requests are routed into length-based queues so short prompts avoid head-of-line blocking and time-to-first-token (TTFT) improves. For prefill, GreenLLM collects short traces on a GPU node, fits compact latency-power models over SM frequency, and solves a queueing-aware optimization to select energy-minimal clocks per class. During decode, a lightweight dual-loop controller tracks throughput (tokens per second) and adjusts frequency in hysteretic, fine-grained steps to hold tail time-between-tokens (TBT) within target bounds. Across Alibaba and Azure trace replays, GreenLLM reduces total energy by up to 34 percent versus the default DVFS baseline, with no loss of throughput and less than 3.5 percent additional SLO violations.
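One control tick of the dual-loop, hysteresis-based decode adjustment might look like the following sketch. All thresholds, step sizes, and frequency bounds are assumptions chosen for illustration; the paper's actual controller gains and limits are not reproduced here.

```python
def decode_controller(freq, tps, tps_target, tbt_p99, tbt_slo,
                      freq_min=600, freq_max=1980, step=60,
                      hysteresis=0.05):
    """One tick of a sketched dual-loop decode controller (MHz in/out).

    Latency loop: if tail time-between-tokens (TBT) breaches the SLO,
    raise the SM clock immediately.
    Throughput loop: otherwise track the tokens-per-second target,
    stepping the clock down only when throughput sits comfortably above
    target; the hysteresis band around the target avoids oscillating
    between adjacent frequency steps.
    """
    if tbt_p99 > tbt_slo:                      # SLO at risk: speed up now
        return min(freq + step, freq_max)
    if tps > tps_target * (1 + hysteresis):    # clear headroom: save energy
        return max(freq - step, freq_min)
    if tps < tps_target * (1 - hysteresis):    # below band: speed up
        return min(freq + step, freq_max)
    return freq                                # inside band: hold frequency
```

Small fixed steps plus the hysteresis band are what make the adjustment "fine-grained": the clock drifts toward the lowest frequency that still meets the throughput target, rather than jumping between extremes.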