🤖 AI Summary
This work addresses a critical limitation of existing answer verifiers: they assess only final-answer correctness while ignoring errors in the reasoning process, thereby misclassifying correctly answered but erroneously derived solutions as valid. To remedy this, the paper introduces PRIME, a benchmark for Process-Outcome Alignment verification that systematically evaluates verifiers' ability to jointly judge derivation soundness and answer correctness on 2,530 high-difficulty, college-level STEM problems in mathematics and engineering, curated via a consistency-based filtering pipeline. Building on PRIME, the authors propose a process-aware RLVR (Reinforcement Learning with Verifiable Rewards) training paradigm that uses verifiers selected with the benchmark. This paradigm substantially outperforms the outcome-only verification baseline, yielding absolute gains of 8.29%, 9.12%, and 7.31% on AIME24, AIME25, and Beyond-AIME, respectively, for Qwen3-14B-Base, and verifier accuracy on PRIME correlates strongly with RLVR training effectiveness (R² > 0.92), making the benchmark a reliable predictor for verifier selection.
📝 Abstract
While model-based verifiers are essential for scaling Reinforcement Learning with Verifiable Rewards (RLVR), current outcome-centric verification paradigms primarily focus on the consistency between the final result and the ground truth, often neglecting potential errors in the derivation process. This leads to assigning positive rewards to correct answers produced from incorrect derivations. To bridge this gap, we introduce PRIME, a benchmark for evaluating verifiers on Process-Outcome Alignment verification in Mathematics and Engineering. Curated from a comprehensive collection of college-level STEM problems, PRIME comprises 2,530 high-difficulty samples through a consistency-based filtering pipeline. Through extensive evaluation, we find that current verifiers frequently fail to detect derivation flaws. Furthermore, we propose a process-aware RLVR training paradigm utilizing verifiers selected via PRIME. This approach substantially outperforms the outcome-only verification baseline, achieving absolute performance gains of 8.29%, 9.12%, and 7.31% on AIME24, AIME25, and Beyond-AIME, respectively, for the Qwen3-14B-Base model. Finally, we demonstrate a strong linear correlation ($R^2>0.92$) between verifier accuracy on PRIME and RLVR training effectiveness, validating PRIME as a reliable predictor for verifier selection.
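To make the failure mode concrete, here is a minimal illustrative sketch (not the paper's implementation) of the difference between outcome-only and process-aware reward assignment. The `Solution` class, the `steps_valid` per-step judgments, and both reward functions are hypothetical names invented for this example; in practice the per-step validity signal would come from a model-based process verifier.

```python
from dataclasses import dataclass

@dataclass
class Solution:
    """A model-generated solution: per-step validity judgments plus a final answer.
    (Hypothetical structure for illustration; steps_valid would come from a
    process verifier in a real pipeline.)"""
    steps_valid: list
    final_answer: str

def outcome_only_reward(sol: Solution, ground_truth: str) -> float:
    """Outcome-centric verification: reward depends solely on the final answer."""
    return 1.0 if sol.final_answer == ground_truth else 0.0

def process_aware_reward(sol: Solution, ground_truth: str) -> float:
    """Process-aware verification: a positive reward requires both a correct
    final answer and a derivation with no invalid steps."""
    answer_ok = sol.final_answer == ground_truth
    process_ok = all(sol.steps_valid)
    return 1.0 if (answer_ok and process_ok) else 0.0

# A correct answer reached through a flawed derivation (step 2 is invalid):
lucky = Solution(steps_valid=[True, False, True], final_answer="42")
print(outcome_only_reward(lucky, "42"))   # 1.0 -- flawed derivation still rewarded
print(process_aware_reward(lucky, "42"))  # 0.0 -- derivation flaw blocks the reward
```

The sketch shows why an outcome-only verifier leaks reward to lucky guesses and cancelling errors, which is exactly the gap PRIME is designed to measure.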