🤖 AI Summary
Dynamic output length in LLM inference causes severe load imbalance during decoding, leading to SLO violations and out-of-memory (OOM) errors. To address this, we propose an adaptive rescheduling system based on output-length prediction. Our approach introduces the first lightweight, continuous predictor that leverages native LLM hidden states—without requiring auxiliary tokens or fine-tuning—to model remaining generation length with fine-grained accuracy and minimal overhead. Based on real-time predictions, the system dynamically reallocates prefill and decode resources to enable load-aware scheduling. Experiments demonstrate a 49.42% reduction in mean absolute error (MAE) for length prediction, a 93.28% decrease in predictor parameter count, a 74.77% reduction in P99 time-per-output-token (TPOT), and up to a 2.24× improvement in goodput—substantially improving throughput and latency while maintaining system stability.
📝 Abstract
Large Language Model (LLM) inference has emerged as a fundamental paradigm. In real-world scenarios, variations in output length cause severe workload imbalance in the decode phase, particularly for long-output reasoning tasks. Existing systems, such as PD disaggregation architectures, rely on static prefill-to-decode scheduling, which often results in SLO violations and OOM failures under evolving decode workloads.
In this paper, we propose ARES, an adaptive decode-phase rescheduling system powered by length prediction to anticipate future workloads. Our core contributions include: (1) a lightweight, continuous LLM-native prediction method that leverages LLM hidden states to model remaining generation length with high precision (reducing MAE by 49.42%) and low overhead (cutting predictor parameters by 93.28%); (2) a decode-phase rescheduling solution with a dynamic balancing mechanism that integrates current and predicted workloads, reducing P99 TPOT by 74.77% and achieving up to 2.24× higher goodput.
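The abstract does not spell out the predictor's architecture, but the idea of a lightweight head that regresses remaining output length from a decode step's hidden state can be sketched as below. All names, dimensions, and the two-layer MLP structure are illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN_DIM = 64  # hypothetical hidden size (real LLMs use e.g. 4096)
PROJ_DIM = 8     # small bottleneck keeps the predictor's parameter count tiny

# Hypothetical lightweight head: a two-layer MLP regressing the remaining
# output length from the current decode step's hidden state. Because it
# reads the model's native hidden state, no auxiliary tokens or fine-tuning
# of the base LLM are needed.
W1 = rng.normal(0, 0.1, (HIDDEN_DIM, PROJ_DIM))
b1 = np.zeros(PROJ_DIM)
W2 = rng.normal(0, 0.1, (PROJ_DIM, 1))
b2 = np.zeros(1)

def predict_remaining_length(hidden_state: np.ndarray) -> float:
    """Continuous prediction: re-run the head at every decode step."""
    h = np.maximum(hidden_state @ W1 + b1, 0.0)    # ReLU
    return float(np.maximum(h @ W2 + b2, 0.0)[0])  # lengths are non-negative

# One hidden state per in-flight request; a scheduler could rank requests
# by predicted remaining work to rebalance decode instances and avoid OOM.
hidden = rng.normal(size=HIDDEN_DIM)
print(predict_remaining_length(hidden))
```

The per-step cost is two small matrix-vector products, which is why such a head adds negligible overhead relative to a decode forward pass.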