AI Summary
This work addresses the trade-off between inference accuracy and computational efficiency in test-time scaling (TTS), which stems from the granularity at which the verifier is invoked. We present the first systematic investigation of how verification granularity affects TTS performance. To this end, we propose VG-Search, a unified search framework with a tunable granularity parameter that generalizes beam search and Best-of-N sampling. VG-Search introduces dynamic granularity scheduling and generation-verification co-optimization to adaptively balance accuracy and compute cost. Experiments show that our method improves accuracy by up to 3.1% over standard beam search and 3.6% over Best-of-N, while reducing FLOPs by over 52%. These results significantly enhance both the scalability and practical utility of TTS.
Abstract
Test-time scaling (TTS) has proven effective at enhancing the reasoning capabilities of large language models (LLMs). Verification plays a key role in TTS, influencing both (1) reasoning performance and (2) compute efficiency through the quality and computational cost of verification. In this work, we challenge the conventional paradigms of verification and make the first attempt to systematically investigate the impact of verification granularity, that is, how frequently the verifier is invoked during generation, beyond verifying only the final output or individual generation steps. To this end, we introduce Variable Granularity Search (VG-Search), a unified algorithm that generalizes beam search and Best-of-N sampling via a tunable granularity parameter g. Extensive experiments with VG-Search under varying compute budgets, generator-verifier configurations, and task attributes reveal that dynamically selecting g can improve compute efficiency and scaling behavior. Building on these findings, we propose adaptive VG-Search strategies that achieve accuracy gains of up to 3.1% over beam search and 3.6% over Best-of-N, while reducing FLOPs by over 52%. We will open-source the code to support future research.
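To illustrate how a single granularity parameter can interpolate between beam search and Best-of-N, here is a minimal, self-contained sketch. This is not the paper's implementation; `sample_step`, `verify`, and the function signature are stand-ins chosen for illustration. With g = 1 the verifier scores and prunes after every step (step-level beam search); with g at least max_steps, only complete sequences are scored (a Best-of-N-style scheme).

```python
import random

def vg_search(sample_step, verify, num_beams, expand_k, g, max_steps, seed=0):
    """Sketch of variable-granularity search (hypothetical interface).

    sample_step(seq, rng) -> one candidate next token for sequence `seq`
    verify(seq)           -> a scalar score for a (partial) sequence
    g                     -> verification granularity: the verifier is
                             invoked, and beams pruned, every g steps.
    """
    rng = random.Random(seed)
    beams = [() for _ in range(num_beams)]
    for step in range(1, max_steps + 1):
        if step % g == 0 or step == max_steps:
            # Verification step: branch each beam into expand_k candidates,
            # score them with the verifier, and keep the top num_beams.
            candidates = [seq + (sample_step(seq, rng),)
                          for seq in beams for _ in range(expand_k)]
            candidates.sort(key=verify, reverse=True)
            beams = candidates[:num_beams]
        else:
            # No verification: each beam continues with a single sample.
            beams = [seq + (sample_step(seq, rng),) for seq in beams]
    return max(beams, key=verify)
```

Under this sketch, a larger g means fewer verifier invocations (lower verification cost) but coarser pruning, which is the accuracy-efficiency trade-off the paper's adaptive strategies navigate by choosing g dynamically rather than fixing it.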