🤖 AI Summary
Fixed inference token budgets in large language models often lead to redundant computation on simple queries and insufficient computation on complex ones, making it difficult to balance efficiency and accuracy. This work proposes Predictive Scheduling, a framework that dynamically allocates a fixed total token budget across queries based on their estimated difficulty. Difficulty is predicted by a lightweight predictor, either an MLP over the Transformer's intermediate-layer hidden states or a LoRA-finetuned classifier, whose estimates feed a greedy batch allocator that adjusts per-query token assignments. The authors present this as the first approach to schedule inference resources dynamically according to query complexity. On GSM8K, it achieves a 7.9 percentage point absolute accuracy gain over uniform allocation at the same computational cost, closing more than 50% of the gap to an ideal oracle scheduler.
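The hidden-state predictor can be pictured as a small classification head attached to an intermediate Transformer layer. The sketch below is illustrative only, assuming details not given here: the class name `DifficultyMLP`, mean-pooling over prompt tokens, and a bucketed difficulty output are our own assumptions, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class DifficultyMLP(nn.Module):
    """Illustrative MLP mapping a pooled intermediate-layer hidden state
    to a predicted difficulty bucket (each bucket mapped to a token budget)."""

    def __init__(self, hidden_size: int, n_buckets: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, 256),
            nn.ReLU(),
            nn.Linear(256, n_buckets),  # logits over difficulty buckets
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size), e.g. taken from a
        # middle layer of the frozen base model during a cheap prefill pass.
        pooled = hidden_states.mean(dim=1)  # mean-pool over prompt tokens
        return self.net(pooled)
```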
📝 Abstract
Large language models (LLMs) achieve state-of-the-art accuracy on complex reasoning tasks by generating multiple chain-of-thought (CoT) traces, but a fixed token budget per query leads to over-computation on easy inputs and under-computation on hard ones. We introduce Predictive Scheduling, a plug-and-play framework that runs lightweight predictors before any full generation, either an MLP on intermediate transformer hidden states or a LoRA-fine-tuned classifier on the raw question text, to estimate each query's difficulty or optimal reasoning length. A greedy batch allocator then dynamically distributes a fixed total token budget across queries to maximize expected accuracy. On the GSM8K arithmetic benchmark, predictive scheduling yields up to 7.9 percentage points of absolute accuracy gain over uniform budgeting at identical token cost, closing over 50% of the gap to an oracle with perfect foresight. A systematic layer-wise study reveals that the transformer's middle layers (12-17) carry the richest signals for reasoning-length estimation. These results demonstrate that pre-run budget prediction enables fine-grained control of the compute-accuracy trade-off, offering a concrete path toward latency-sensitive, cost-efficient LLM deployments.
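To make the allocation step concrete, here is a minimal sketch, under our own assumptions, of a greedy budget allocator: it spends a fixed total budget in small increments, always on the query whose predicted marginal accuracy gain from extra tokens is currently highest. The function name and the `predicted_gain` callback are hypothetical stand-ins for the paper's predictor-driven scoring, not its actual interface.

```python
from typing import Callable, List

def greedy_allocate(
    n_queries: int,
    total_budget: int,
    step: int,
    predicted_gain: Callable[[int, int], float],
) -> List[int]:
    """Distribute `total_budget` tokens over `n_queries` in chunks of `step`,
    each time giving the next chunk to the query with the highest predicted
    marginal accuracy gain given its current allocation."""
    budgets = [0] * n_queries
    remaining = total_budget
    while remaining >= step:
        # predicted_gain(i, b): estimated accuracy gain for query i if its
        # budget grows from b to b + step (hypothetical scoring function).
        gains = [predicted_gain(i, budgets[i]) for i in range(n_queries)]
        best = max(range(n_queries), key=lambda i: gains[i])
        budgets[best] += step
        remaining -= step
    return budgets
```

In the paper's setting the marginal-gain estimate would come from the difficulty or reasoning-length predictor; the callback is left abstract here so the allocation logic stands on its own.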