🤖 AI Summary
Current reasoning models exhibit weak self-assessment, which limits the effectiveness of test-time computation strategies such as best-of-n sampling and self-reflection. To address this, the paper proposes a Pairwise Explanatory Verifier, a reinforcement learning framework built on GRPO that is trained to produce calibrated confidence scores alongside natural language reasoning when comparing pairs of candidate solutions. A key contribution is its ability to identify challenging failure modes, such as "double errors" where both candidates are identically incorrect, which elude standard methods like majority voting. Experiments show the verifier improves both the accuracy and the computational efficiency of test-time strategies across diverse reasoning tasks while retaining strong discriminative power for decision-making under uncertainty.
📝 Abstract
Advanced test-time computing strategies are essential for scaling reasoning models, but their effectiveness is capped by the models' poor self-evaluation. We propose a pairwise Explanatory Verifier, trained via reinforcement learning (GRPO), that produces calibrated confidence scores and associated natural language reasoning for generated solutions. Our verifier improves the accuracy and efficiency of test-time strategies like best-of-n and self-reflection. Crucially, it excels at identifying challenging failure modes, such as when both candidate solutions are identically incorrect, succeeding where standard methods like majority voting fail.
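The abstract describes using the pairwise verifier to drive best-of-n selection. As a minimal sketch of that idea (not the paper's implementation), the snippet below runs a sequential tournament over candidate solutions: a hypothetical `compare` function stands in for the trained verifier and returns a confidence score for each member of a pair, and the higher-scored candidate survives each round.

```python
from typing import Callable, Dict, List, Tuple

Candidate = Dict[str, object]

def best_of_n(
    candidates: List[Candidate],
    compare: Callable[[Candidate, Candidate], Tuple[float, float]],
) -> Candidate:
    """Select one candidate via sequential pairwise comparison.

    `compare(a, b)` returns (conf_a, conf_b), the verifier's confidence
    that each candidate in the pair is correct. The current best is
    replaced whenever the challenger receives higher confidence.
    """
    best = candidates[0]
    for challenger in candidates[1:]:
        conf_best, conf_chal = compare(best, challenger)
        if conf_chal > conf_best:
            best = challenger
    return best

# Toy stand-in for the trained verifier: it simply reads a hidden
# quality field. A real verifier would compare the solutions' text.
def toy_compare(a: Candidate, b: Candidate) -> Tuple[float, float]:
    return float(a["quality"]), float(b["quality"])

cands = [
    {"answer": "sol0", "quality": 0.2},
    {"answer": "sol1", "quality": 0.9},
    {"answer": "sol2", "quality": 0.5},
]
print(best_of_n(cands, toy_compare)["answer"])  # → sol1
```

A round-robin or single-elimination bracket over all pairs is an equally valid scheme; the sequential scan above is just the cheapest, using n − 1 verifier calls.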