🤖 AI Summary
Existing research on dynamic inference-time computation allocation primarily focuses on parallel generation (e.g., best-of-N) while neglecting incremental decoding (e.g., beam search) and end-to-end latency. Method: We formulate inference-time computation optimization as a joint problem of dynamic computational resource allocation and decoding strategy selection, treating token consumption and end-to-end latency as dual cost objectives. We propose the first dynamic framework supporting coordinated scheduling of parallel and incremental decoding, which, based on real-time query features, decides both the decoding strategy (e.g., best-of-N vs. beam search) and the associated computational budget. Contribution/Results: Evaluated on standard inference benchmarks, our approach significantly outperforms static strategies, achieving superior trade-offs among accuracy, computational cost, and latency, and remains practical to deploy in real-world large language model serving.
📝 Abstract
Inference-time scaling has emerged as a powerful way to improve large language model (LLM) performance by generating multiple candidate responses and selecting among them. However, existing work on dynamic allocation for test-time compute typically considers only parallel generation methods such as best-of-N, overlooking incremental decoding methods like beam search, and has largely ignored latency, focusing only on token usage. We formulate inference-time scaling as a problem of dynamic compute allocation and method selection, where the system must decide which strategy to apply and how much compute to allocate on a per-query basis. Our framework explicitly incorporates both token cost and wall-clock latency, the latter being critical for user experience and particularly for agentic workflows where models must issue multiple queries efficiently. Experiments on reasoning benchmarks show that our approach consistently outperforms static strategies, achieving favorable accuracy-cost trade-offs while remaining practical for deployment.
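The per-query decision described above (pick a decoding strategy and a budget to balance accuracy, token cost, and latency) can be sketched as a simple utility-maximization step. This is a minimal illustration, not the paper's actual method: the candidate strategies, predicted accuracies, cost figures, and weight values below are all hypothetical placeholders standing in for whatever predictor and scheduler the framework uses.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    strategy: str        # "best_of_n" or "beam_search"
    budget: int          # number of samples N, or beam width
    exp_accuracy: float  # predicted accuracy on this query (hypothetical predictor)
    token_cost: float    # expected tokens consumed, in thousands
    latency: float       # expected wall-clock seconds; parallel sampling is wide
                         # but shallow, beam search serializes across steps

def select(candidates, token_weight=0.01, latency_weight=0.02):
    """Pick the (strategy, budget) pair maximizing accuracy minus weighted costs."""
    return max(
        candidates,
        key=lambda c: c.exp_accuracy
        - token_weight * c.token_cost
        - latency_weight * c.latency,
    )

# Illustrative per-query candidate set (all numbers are made up):
cands = [
    Candidate("best_of_n", 4, 0.72, 4.0, 2.0),
    Candidate("best_of_n", 16, 0.80, 16.0, 2.5),
    Candidate("beam_search", 4, 0.78, 6.0, 6.0),
]
best = select(cands)
print(best.strategy, best.budget)  # → best_of_n 4
```

With these placeholder numbers, the larger best-of-N budget is penalized by its token cost and beam search by its latency, so the scheduler settles on the cheap parallel option; a harder query (lower predicted accuracy for small budgets) would flip that choice.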