DynScaling: Efficient Verifier-free Inference Scaling via Dynamic and Integrated Sampling

📅 2025-06-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
LLM inference scaling faces two key challenges: reliance on external verifiers complicates deployment, and existing methods neglect practical computational constraints. This paper proposes a verifier-free, efficient dynamic scaling framework. Methodologically, it introduces (1) a novel parallel-serial hybrid sampling strategy that balances reasoning-path diversity with computational controllability, and (2) an uncertainty-driven budget allocation mechanism based on multi-armed bandits, enabling adaptive optimization of computational resources. Evaluated across multiple reasoning tasks, the approach substantially outperforms verifier-free baselines, improving accuracy while reducing inference cost by over 30%. The framework establishes a new paradigm for resource-efficient LLM inference in compute-constrained settings.
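The hybrid sampling idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `mock_llm` stand-in and the prompt format are assumptions, and a real system would call an actual LLM in both stages.

```python
import random

def mock_llm(prompt, temperature=1.0):
    """Stand-in for an LLM call; returns a pseudo-random answer string."""
    random.seed(hash(prompt) % (2**32))
    return f"answer-{random.randint(0, 3)}"

def integrated_sampling(query, n_parallel=4, llm=mock_llm):
    """Sketch of parallel-then-sequential sampling: independent parallel
    samples are stitched into a synthetic 'prior attempts' context, and the
    model produces one refined answer conditioned on all of them."""
    # Parallel stage: independent samples promote diverse reasoning paths.
    parallel = [llm(f"{query} [sample {i}]") for i in range(n_parallel)]
    # Synthetic sequential stage: feed the prior attempts back as a chain,
    # giving the coherence benefits of sequential refinement.
    chain = "\n".join(f"Attempt {i + 1}: {r}" for i, r in enumerate(parallel))
    refined = llm(f"{query}\nPrior attempts:\n{chain}\nRefined answer:")
    return parallel, refined
```

The parallel stage keeps latency controllable (all samples can be issued concurrently), while the synthetic chain recovers the self-correction behavior of sequential sampling.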

📝 Abstract
Inference-time scaling has proven effective in boosting large language model (LLM) performance through increased test-time computation. Yet, its practical application is often hindered by reliance on external verifiers or a lack of optimization for realistic computational constraints. We propose DynScaling, which addresses these limitations through two primary innovations: an integrated parallel-sequential sampling strategy and a bandit-based dynamic budget allocation framework. The integrated sampling strategy unifies parallel and sequential sampling by constructing synthetic sequential reasoning chains from initially independent parallel responses, promoting diverse and coherent reasoning trajectories. The dynamic budget allocation framework formulates the allocation of computational resources as a multi-armed bandit problem, adaptively distributing the inference budget across queries based on the uncertainty of previously sampled responses, thereby maximizing computational efficiency. By combining these components, DynScaling effectively improves LLM performance under practical resource constraints without the need for external verifiers. Experimental results demonstrate that DynScaling consistently surpasses existing verifier-free inference scaling baselines in both task performance and computational cost.
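The bandit-based budget allocation described above can be sketched as a UCB-style loop in which each query is an arm and its "reward" is the uncertainty of its sampled responses. This is an illustrative sketch under stated assumptions: the disagreement-based uncertainty measure and the UCB exploration term are common choices, not necessarily the paper's exact formulation.

```python
import math
from collections import Counter

def uncertainty(responses):
    """Disagreement-based uncertainty: 1 minus the majority-answer fraction."""
    if not responses:
        return 1.0
    counts = Counter(responses)
    return 1.0 - counts.most_common(1)[0][1] / len(responses)

def allocate_budget(queries, sample_fn, total_budget, init_per_query=1, c=1.0):
    """UCB-style allocation: spend the remaining sample budget on the queries
    whose responses disagree the most (highest uncertainty)."""
    # Warm start: a few samples per query to get an initial uncertainty estimate.
    responses = {q: [sample_fn(q) for _ in range(init_per_query)] for q in queries}
    spent = init_per_query * len(queries)
    for t in range(spent, total_budget):
        # Uncertainty plus an exploration bonus for under-sampled queries.
        scores = {
            q: uncertainty(r) + c * math.sqrt(math.log(t + 1) / len(r))
            for q, r in responses.items()
        }
        best = max(scores, key=scores.get)
        responses[best].append(sample_fn(best))
    return responses
```

In effect, easy queries whose samples already agree stop consuming budget early, and the freed-up budget flows to queries where the model is still uncertain.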
Problem

Research questions and friction points this paper is trying to address.

Efficient inference scaling without external verifiers
Dynamic budget allocation for computational efficiency
Integrated parallel-sequential sampling for diverse reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrated parallel-sequential sampling strategy
Bandit-based dynamic budget allocation
Verifier-free efficient inference scaling
👥 Authors
Fei Wang (Google, University of Southern California)
Xingchen Wan (Google)
Ruoxi Sun (Google)
Jiefeng Chen (Google)
Sercan O. Arik (Google)

Machine Learning · Artificial Intelligence · Signal Processing