$V_1$: Unifying Generation and Self-Verification for Parallel Reasoners

📅 2026-03-04
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenge of reliably identifying correct answers among multiple candidate solutions in complex reasoning tasks, where existing methods often fall short because they score each candidate independently with a scalar verifier. The authors propose the $V_1$ framework, built on the key insight that pairwise self-verification outperforms independent scoring. On top of this, they introduce an uncertainty-guided tournament ranking algorithm ($V_1$-Infer) and a pairwise reinforcement learning framework ($V_1$-PairRL) that jointly optimizes generation and verification. By dynamically allocating verification compute to the most uncertain candidate pairs, the approach achieves significant gains on code generation and mathematical reasoning benchmarks: up to a 10% Pass@1 improvement over pointwise verification, 7–9% test-time scaling gains with $V_1$-PairRL over standard RL, and an 8.7% absolute improvement in base Pass@1 in a code-generation setting, all with higher computational efficiency.

πŸ“ Abstract
Test-time scaling for complex reasoning tasks shows that leveraging inference-time compute, by methods such as independently sampling and aggregating multiple solutions, results in significantly better task outcomes. However, a critical bottleneck is verification: sampling is only effective if correct solutions can be reliably identified among candidates. While existing approaches typically evaluate candidates independently via scalar scoring, we demonstrate that models are substantially stronger at pairwise self-verification. Leveraging this insight, we introduce $V_1$, a framework that unifies generation and verification through efficient pairwise ranking. $V_1$ comprises two components: $V_1$-Infer, an uncertainty-guided algorithm using a tournament-based ranking that dynamically allocates self-verification compute to candidate pairs whose relative correctness is most uncertain; and $V_1$-PairRL, an RL framework that jointly trains a single model as both generator and pairwise self-verifier, ensuring the verifier adapts to the generator's evolving distribution. On code generation (LiveCodeBench, CodeContests, SWE-Bench) and math reasoning (AIME, HMMT) benchmarks, $V_1$-Infer improves Pass@1 by up to 10% over pointwise verification and outperforms recent test-time scaling methods while being significantly more efficient. Furthermore, $V_1$-PairRL achieves 7–9% test-time scaling gains over standard RL and pointwise joint training, and improves base Pass@1 by up to 8.7% over standard RL in a code-generation setting.
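The abstract's tournament idea can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `pairwise_verify(a, b)` is an assumed interface standing in for one (possibly noisy) model judgment that candidate `a` is more likely correct than `b`, and the early-stopping rule below is one simple way to spend extra verification calls only on pairs whose running vote margin stays close.

```python
def uncertainty_guided_tournament(candidates, pairwise_verify,
                                  min_votes=1, max_votes=5):
    """Single-elimination tournament over candidate solutions.

    Each matchup collects pairwise judgments until the vote margin can no
    longer flip within the remaining budget, so lopsided (low-uncertainty)
    pairs are settled early while close pairs receive up to `max_votes`
    verification calls.
    """
    pool = list(candidates)
    while len(pool) > 1:
        next_round = []
        # Pair adjacent candidates; an odd one out advances for free.
        for i in range(0, len(pool) - 1, 2):
            a, b = pool[i], pool[i + 1]
            wins_a = 0
            votes = 0
            while votes < max_votes:
                wins_a += 1 if pairwise_verify(a, b) else 0
                votes += 1
                margin = abs(2 * wins_a - votes)  # |wins_a - wins_b|
                # Stop once the trailing candidate cannot catch up with the
                # remaining budget (and at least min_votes were collected).
                if votes >= min_votes and margin > max_votes - votes:
                    break
            next_round.append(a if 2 * wins_a >= votes else b)
        if len(pool) % 2 == 1:
            next_round.append(pool[-1])
        pool = next_round
    return pool[0]
```

With a deterministic oracle such as `lambda a, b: a > b` over toy integer "candidates", the tournament returns the maximum after O(n) comparisons; in practice `pairwise_verify` would prompt the generator model itself to compare two candidate solutions, which is the self-verification step the paper argues is stronger than independent scalar scoring.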
Problem

Research questions and friction points this paper is trying to address.

test-time scaling
complex reasoning
verification
candidate selection
self-verification
Innovation

Methods, ideas, or system contributions that make the work stand out.

pairwise self-verification
test-time scaling
uncertainty-guided ranking
joint generation-verification
reinforcement learning