Latency and Token-Aware Test-Time Compute

📅 2025-09-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing research on dynamic inference-time computation allocation primarily focuses on parallel generation (e.g., best-of-N) while neglecting incremental decoding (e.g., beam search) and end-to-end latency. Method: We formulate inference-time computation optimization as a joint problem of dynamic computational resource allocation and decoding strategy selection, unifying token consumption and end-to-end latency as dual cost objectives. We propose the first dynamic framework supporting coordinated scheduling of parallel and incremental decoding: based on real-time query features, it decides both the decoding strategy (e.g., best-of-N vs. beam search) and the associated computational budget. Contribution/Results: Evaluated on standard reasoning benchmarks, our approach significantly outperforms static strategies, achieving superior trade-offs among accuracy, computational cost, and latency, and demonstrates strong practical deployability for real-world large language model serving.

📝 Abstract
Inference-time scaling has emerged as a powerful way to improve large language model (LLM) performance by generating multiple candidate responses and selecting among them. However, existing work on dynamic allocation for test-time compute typically considers only parallel generation methods such as best-of-N, overlooking incremental decoding methods like beam search, and has largely ignored latency, focusing only on token usage. We formulate inference-time scaling as a problem of dynamic compute allocation and method selection, where the system must decide which strategy to apply and how much compute to allocate on a per-query basis. Our framework explicitly incorporates both token cost and wall-clock latency, the latter being critical for user experience and particularly for agentic workflows where models must issue multiple queries efficiently. Experiments on reasoning benchmarks show that our approach consistently outperforms static strategies, achieving favorable accuracy-cost trade-offs while remaining practical for deployment.
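The per-query decision the abstract describes (pick a decoding strategy and a compute budget, trading estimated accuracy against both token cost and wall-clock latency) can be sketched as a small utility-maximization routine. This is an illustrative reconstruction, not the paper's actual method: the strategy profiles, accuracy estimates, and cost weights `lam` and `mu` below are all made-up assumptions.

```python
from dataclasses import dataclass

@dataclass
class Strategy:
    name: str        # e.g. "best_of_n" or "beam_search"
    n: int           # number of samples / beam width
    tokens: float    # expected total token consumption
    latency: float   # expected end-to-end latency (seconds)
    accuracy: float  # estimated accuracy on this query (assumed given)

def select_strategy(candidates, lam, mu):
    """Choose the strategy maximizing estimated accuracy minus
    weighted token cost and latency cost (the dual cost objectives)."""
    return max(candidates, key=lambda s: s.accuracy - lam * s.tokens - mu * s.latency)

# Hypothetical per-query profiles: parallel best-of-8 is fast but
# token-hungry; beam search (width 8) is slightly more accurate here
# but slower end-to-end because decoding is incremental.
options = [
    Strategy("best_of_8", 8, tokens=4000, latency=2.0, accuracy=0.90),
    Strategy("beam_8",    8, tokens=2000, latency=6.0, accuracy=0.92),
    Strategy("greedy",    1, tokens=500,  latency=1.0, accuracy=0.55),
]

# With a small latency weight the scheduler prefers beam search;
# raising the weight flips the choice to parallel best-of-N.
print(select_strategy(options, lam=5e-5, mu=0.02).name)  # beam_8
print(select_strategy(options, lam=5e-5, mu=0.1).name)   # best_of_8
```

In a deployed scheduler the accuracy and cost estimates would come from a model of query features rather than hard-coded tables, but the flip from beam search to best-of-N as the latency weight grows illustrates why treating latency as a first-class cost changes the chosen strategy.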
Problem

Research questions and friction points this paper is trying to address.

Optimizing dynamic compute allocation for LLM inference-time scaling
Balancing token cost and latency in test-time compute strategies
Improving accuracy-cost trade-offs for agentic workflows and user experience
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic compute allocation per query
Incorporates both token cost and latency
Outperforms static inference-time strategies
Jenny Y. Huang
PhD Student, Massachusetts Institute of Technology
Machine Learning · Statistics
Mehul Damani
MIT
Reinforcement Learning · Multi-Agent Systems
Yousef El-Kurdi
IBM Research, MIT-IBM Watson AI Lab
Ramon Astudillo
IBM Research, MIT-IBM Watson AI Lab
Wei Sun
IBM Research, MIT-IBM Watson AI Lab