🤖 AI Summary
Existing reward benchmarks do not evaluate reference-based verifiers, which hinders rigorous analysis of verifier reliability in reinforcement learning (RL). To address this gap, we introduce VerifyBench and VerifyBench-Hard, two dedicated benchmarks for systematically evaluating how accurately large language models perform reference-based reward judgment on reasoning tasks. The benchmarks combine a difficulty-stratified design, high-quality human annotation, and multi-round verification to establish a data-driven evaluation pipeline, enabling fine-grained diagnostic analysis along two dimensions: reward consistency and correctness. Experimental results show that current mainstream models, especially smaller-scale ones, still fall considerably short when making reference-based reward judgments. The proposed benchmarks provide reproducible evaluation protocols and actionable analytical insights, filling a critical gap in RL reward assessment and advancing the development of trustworthy verifier modeling.
📝 Abstract
Large reasoning models such as OpenAI o1 and DeepSeek-R1 have achieved remarkable performance in the domain of reasoning. A key component of their training is the incorporation of verifiable rewards within reinforcement learning (RL). However, existing reward benchmarks do not evaluate reference-based reward systems, leaving researchers with limited understanding of the accuracy of verifiers used in RL. In this paper, we introduce two benchmarks, VerifyBench and VerifyBench-Hard, designed to assess the performance of reference-based reward systems. These benchmarks are constructed through meticulous data collection and curation, followed by careful human annotation to ensure high quality. Current models still show considerable room for improvement on both VerifyBench and VerifyBench-Hard, particularly smaller-scale models. Furthermore, we conduct a comprehensive analysis of the evaluation results, offering insights for understanding and developing reference-based reward systems. The proposed benchmarks serve as effective tools for guiding the development of more accurate verifiers and, in turn, stronger reasoning capabilities in models trained via RL on reasoning tasks.
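To make the evaluation target concrete, the sketch below illustrates one plausible way a reference-based verifier could be scored against human-annotated labels: an LLM judge is shown the question, the reference answer, and a candidate completion, and its yes/no verdict is compared with the annotator's label. This is a minimal, hypothetical example, not the paper's actual protocol; the names `VerifierExample`, `judge_fn`, and the prompt template are all assumptions introduced here for illustration.

```python
# Minimal sketch (not the paper's protocol) of scoring a reference-based
# verifier against human-annotated labels. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class VerifierExample:
    question: str          # reasoning problem posed to the policy model
    reference_answer: str  # ground-truth answer the verifier compares against
    completion: str        # candidate response produced by the policy model
    human_label: bool      # annotator's judgment: is the completion correct?


PROMPT_TEMPLATE = (
    "Question: {question}\n"
    "Reference answer: {reference}\n"
    "Candidate response: {completion}\n"
    "Is the candidate response correct with respect to the reference? "
    "Answer 'yes' or 'no'."
)


def verifier_accuracy(examples: List[VerifierExample],
                      judge_fn: Callable[[str], str]) -> float:
    """Fraction of examples where the LLM judge agrees with the human label."""
    correct = 0
    for ex in examples:
        prompt = PROMPT_TEMPLATE.format(
            question=ex.question,
            reference=ex.reference_answer,
            completion=ex.completion,
        )
        # judge_fn wraps whatever LLM serves as the verifier under evaluation.
        verdict = judge_fn(prompt).strip().lower().startswith("yes")
        correct += int(verdict == ex.human_label)
    return correct / len(examples) if examples else 0.0
```

Under this framing, a benchmark like VerifyBench supplies the annotated tuples, and the reported metric is simply the judge's agreement rate with human labels; harder splits would then concentrate on cases where naive string matching or weak judges tend to fail.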