Predicting LLM Output Length via Entropy-Guided Representations

📅 2026-02-12
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the substantial padding overhead in batched inference of large language models (LLMs) caused by the long-tailed distribution of output lengths. Existing static length prediction methods suffer from high computational costs, poor generalization, and limited adaptability to stochastic generation scenarios. To overcome these limitations, the authors propose a lightweight length prediction framework that achieves high-accuracy estimation with negligible additional overhead by reusing the main model's hidden states and token entropy. The approach combines Entropy-Guided Token Pooling (EGTP) for static length estimation with Progressive Length Prediction (PLP), which dynamically estimates the remaining length during decoding. The authors also introduce ForeLen, the first comprehensive benchmark for output length prediction, on which EGTP reduces mean absolute error by 29.16%. When integrated with a length-aware scheduler, the method significantly improves end-to-end throughput, establishing a new baseline for efficient LLM inference.

πŸ“ Abstract
The long-tailed distribution of sequence lengths in LLM serving and reinforcement learning (RL) sampling causes significant computational waste due to excessive padding in batched inference. Existing methods rely on auxiliary models for static length prediction, but they incur high overhead, generalize poorly, and fail in stochastic "one-to-many" sampling scenarios. We introduce a lightweight framework that reuses the main model's internal hidden states for efficient length prediction. Our framework features two core components: 1) Entropy-Guided Token Pooling (EGTP), which uses on-the-fly activations and token entropy for highly accurate static prediction with negligible cost, and 2) Progressive Length Prediction (PLP), which dynamically estimates the remaining length at each decoding step to handle stochastic generation. To validate our approach, we build and release ForeLen, a comprehensive benchmark with long-sequence, Chain-of-Thought, and RL data. On ForeLen, EGTP achieves state-of-the-art accuracy, reducing MAE by 29.16% over the best baseline. Integrating our methods with a length-aware scheduler yields significant end-to-end throughput gains. Our work provides a new technical and evaluation baseline for efficient LLM inference.
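The abstract describes EGTP only at a high level: hidden states from the prompt's forward pass are pooled using token entropy, and a lightweight head regresses the expected output length. The sketch below illustrates that general idea under stated assumptions; the entropy-weighted pooling and the linear regression head (`head_w`, `head_b`) are hypothetical illustrations, not the authors' exact design.

```python
import numpy as np

def token_entropy(logits):
    """Per-token entropy of the next-token distribution.

    logits: (T, V) array of next-token logits reused from the
    main model's forward pass (no extra model call needed).
    """
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)          # (T,)

def egtp_features(hidden, logits):
    """Pool hidden states into one vector, weighted by token entropy.

    hidden: (T, d) hidden states from the same forward pass.
    Tokens with higher predictive uncertainty get larger weight
    (one plausible reading of "entropy-guided"; assumption, not spec).
    """
    h = token_entropy(logits)
    w = h / (h.sum() + 1e-12)    # normalize entropies into pooling weights
    return w @ hidden            # (d,) pooled representation

def predict_length(hidden, logits, head_w, head_b):
    """Static output-length estimate via a hypothetical linear head."""
    feat = egtp_features(hidden, logits)
    return float(feat @ head_w + head_b)
```

Because both the hidden states and the logits already exist from the prefill pass, a scheme like this adds only a pooling step and a tiny head on top, which is consistent with the paper's "negligible cost" claim.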
Problem

Research questions and friction points this paper is trying to address.

LLM output length prediction
long-tailed sequence length
computational waste
batched inference padding
stochastic generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Entropy-Guided Token Pooling
Progressive Length Prediction
LLM inference efficiency
length prediction
ForeLen benchmark
Huanyi Xie
King Abdullah University of Science and Technology (KAUST); Provable Responsible AI and Data Analytics (PRADA) Lab
Yubin Chen
Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)
Liangyu Wang
King Abdullah University of Science and Technology (KAUST); Provable Responsible AI and Data Analytics (PRADA) Lab
Lijie Hu
Assistant Professor, MBZUAI
Explainable AI · LLM · Differential Privacy
Di Wang
King Abdullah University of Science and Technology
Differential Privacy · Machine Unlearning · Knowledge Editing