🤖 AI Summary
Response lengths in autoregressive large language model (LLM) generation are highly stochastic and vary with both model and prompt; existing length prediction methods either introduce bias into generation or ignore the intrinsic heterogeneity of empirical length distributions.
Method: We construct the first large-scale benchmark of LLM response length distributions, covering 13 open-source models, 7 instruction categories, and 10 responses per prompt–model pair, and propose a non-intrusive, unbiased characterization paradigm based on fixed decoding parameters (temperature = 1.0, top-p = 1.0), implemented as a multi-sample, quantile-based evaluation framework.
Contribution/Results: Our benchmark systematically reveals heterogeneity in length distributions both within and across models, and identifies partial text degeneration, in which only a subset of the responses to a prompt degenerates. We release an open dataset with full statistical summaries and experimental configurations, empirically demonstrating the high unpredictability of response length. This provides a reproducible, reliable foundation for length prediction research and inference scheduling optimization.
📝 Abstract
Efficiently managing compute resources for Large Language Model (LLM) inference remains challenging due to the inherently stochastic and variable lengths of autoregressive text generation. Accurately estimating response lengths in advance enables proactive resource allocation, yet existing approaches either bias text generation towards certain lengths or rely on assumptions that ignore model- and prompt-specific variability. We introduce CASTILLO, a dataset characterizing response length distributions across 13 widely-used open-source LLMs evaluated on seven distinct instruction-following corpora. For each $\langle$prompt, model$\rangle$ sample pair, we generate 10 independent completions using fixed decoding hyper-parameters, record the token length of each response, and publish summary statistics (mean, std-dev, percentiles), along with the shortest and longest completions and the exact generation settings. Our analysis reveals significant inter- and intra-model variability in response lengths (even under identical generation settings), as well as model-specific behaviors and occurrences of partial text degeneration in only subsets of responses. CASTILLO enables the development of predictive models for proactive scheduling and provides a systematic framework for analyzing model-specific generation behaviors. We publicly release the dataset and code to foster research at the intersection of generative language modeling and systems.
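The per-pair record described above (summary statistics plus the extreme completions and the fixed decoding settings) can be sketched roughly as below. This is an illustrative reconstruction, not the dataset's actual code; the function name, schema keys, and nearest-rank percentile method are assumptions.

```python
import statistics

def summarize_lengths(completions):
    """Summarize token lengths of the sampled completions for one
    (prompt, model) pair: mean, std-dev, percentiles, and the
    shortest/longest completions, mirroring the published statistics."""
    lengths = sorted(len(c) for c in completions)  # token count per completion

    def percentile(p):
        # Nearest-rank percentile over the sorted token lengths.
        k = round(p / 100 * (len(lengths) - 1))
        return lengths[max(0, min(len(lengths) - 1, k))]

    return {
        "mean": statistics.mean(lengths),
        "stdev": statistics.stdev(lengths),
        "percentiles": {p: percentile(p) for p in (25, 50, 75, 99)},
        "shortest": min(completions, key=len),
        "longest": max(completions, key=len),
        # Decoding settings held fixed across all samples, per the paper.
        "generation_settings": {"temperature": 1.0, "top_p": 1.0},
    }

# Toy example: ten completions represented as token-id lists of varying length,
# standing in for 10 independent samples from one prompt-model pair.
fake = [[0] * n for n in (12, 35, 18, 44, 29, 31, 27, 50, 22, 40)]
stats = summarize_lengths(fake)
```

Keeping only the extreme completions alongside the summary statistics keeps per-pair records compact while still exposing the tails where partial degeneration shows up.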