🤖 AI Summary
LLM inference scaling faces two key challenges: reliance on external verifiers complicates deployment, and existing methods neglect practical computational constraints. This paper proposes a verifier-free, efficient dynamic scaling framework. Methodologically, it introduces (1) a novel parallel-sequential hybrid sampling strategy that balances reasoning-path diversity with computational controllability, and (2) an uncertainty-driven budget allocation mechanism based on multi-armed bandits, enabling adaptive optimization of computational resources. Evaluated across multiple reasoning tasks, the approach substantially outperforms verifier-free baselines, improving accuracy while reducing inference cost by over 30%. The framework establishes a new paradigm for resource-efficient LLM inference in compute-constrained settings.
📝 Abstract
Inference-time scaling has proven effective in boosting large language model (LLM) performance through increased test-time computation. Yet its practical application is often hindered by reliance on external verifiers or a lack of optimization for realistic computational constraints. We propose DynScaling, which addresses these limitations through two primary innovations: an integrated parallel-sequential sampling strategy and a bandit-based dynamic budget allocation framework. The integrated sampling strategy unifies parallel and sequential sampling by constructing synthetic sequential reasoning chains from initially independent parallel responses, promoting diverse yet coherent reasoning trajectories. The dynamic budget allocation framework formulates the allocation of computational resources as a multi-armed bandit problem, adaptively distributing the inference budget across queries based on the uncertainty of previously sampled responses, thereby maximizing computational efficiency. By combining these components, DynScaling improves LLM performance under practical resource constraints without the need for external verifiers. Experimental results demonstrate that DynScaling consistently surpasses existing verifier-free inference scaling baselines in both task performance and computational efficiency.
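To make the bandit framing concrete, here is a minimal sketch of uncertainty-driven budget allocation. This is not the paper's implementation: the function names (`answer_uncertainty`, `allocate_budget`), the entropy-based uncertainty proxy, and the UCB-style exploration bonus are all illustrative assumptions. It only shows the general shape of treating each query as an arm and routing extra samples toward queries whose responses disagree most.

```python
import math
from collections import Counter

def answer_uncertainty(answers):
    """Entropy of the empirical answer distribution (hypothetical proxy):
    higher means the sampled responses disagree more."""
    counts = Counter(answers)
    n = len(answers)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def allocate_budget(query_samples, total_budget, c=1.0):
    """Greedy UCB-style allocation: repeatedly grant one extra sample to
    the query with the highest uncertainty-plus-exploration score.

    query_samples: list of answer lists, one per query (each non-empty).
    Returns the number of extra samples assigned to each query.
    In a real system each granted sample would be drawn from the LLM and
    the uncertainty re-estimated; here the initial samples stay fixed.
    """
    n_queries = len(query_samples)
    counts = [len(s) for s in query_samples]  # samples already drawn
    extra = [0] * n_queries
    for t in range(1, total_budget + 1):
        scores = [
            answer_uncertainty(query_samples[i])
            + c * math.sqrt(math.log(t + sum(counts)) / (counts[i] + extra[i]))
            for i in range(n_queries)
        ]
        best = max(range(n_queries), key=lambda i: scores[i])
        extra[best] += 1
    return extra
```

Under this sketch, a query whose four samples all agree ("A, A, A, A") has zero entropy and receives little or no extra budget, while a query with four distinct answers attracts most of the remaining samples, matching the abstract's goal of spending compute where previous responses are most uncertain.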