Hard2Verify: A Step-Level Verification Benchmark for Open-Ended Frontier Math

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Verifying individual reasoning steps in open-ended, frontier mathematical problems remains highly challenging due to the lack of fine-grained, human-annotated benchmarks. Method: We introduce the first large-scale, manually annotated benchmark for step-level verification of mathematical proofs, covering diverse, fine-grained error types. Our evaluation framework integrates generative critique models, process-aware reward modeling, and multi-model collaborative assessment to systematically evaluate 29 open- and closed-source verifiers. Contributions/Results: (1) We establish the first high-difficulty, reproducible step-level verification benchmark; (2) We demonstrate that closed-source models significantly outperform open-source ones (by 23.6% average accuracy) and identify formal understanding, context dependency, and error propagation as critical bottlenecks; (3) We empirically validate the efficacy of self-correction mechanisms and propose a scalable pathway for enhancing verification capability.

📝 Abstract
Large language model (LLM)-based reasoning systems have recently achieved gold medal-level performance in the IMO 2025 competition, writing mathematical proofs where, to receive full credit, each step must be not only correct but also sufficiently supported. To train LLM-based reasoners in such challenging, open-ended settings, strong verifiers capable of catching step-level mistakes are necessary prerequisites. We introduce Hard2Verify, a human-annotated, step-level verification benchmark produced with over 500 hours of human labor. Hard2Verify is designed to rigorously assess step-level verifiers at the frontier: Verifiers must provide step-level annotations or identify the first error in responses generated by frontier LLMs for very recent, challenging, and open-ended math questions. We evaluate 29 generative critics and process reward models, demonstrating that, beyond a few standouts, open-source verifiers lag behind closed-source models. We subsequently analyze what drives poor performance in step-level verification, the impacts of scaling verifier compute, and fundamental questions such as self-verification and verification-generation dynamics.
Problem

Research questions and friction points this paper is trying to address.

Assessing step-level verification capabilities for mathematical reasoning systems
Identifying first errors in LLM-generated proofs for challenging math problems
Evaluating performance gaps between open-source and closed-source verification models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Human-annotated step-level verification benchmark
Evaluates verifiers on frontier LLM math responses
Analyzes performance drivers and scaling compute impacts
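The two verification tasks the benchmark describes, labeling each proof step and identifying the first erroneous step, can be sketched as simple scoring functions. This is an illustrative sketch under assumed data structures (boolean per-step labels), not the paper's actual evaluation protocol; all function names here are hypothetical.

```python
# Illustrative sketch (assumptions, not the paper's protocol): scoring a
# step-level verifier on per-step labels and first-error identification.

def step_label_accuracy(gold_labels, pred_labels):
    """Fraction of steps whose correct/incorrect label matches the human annotation."""
    assert len(gold_labels) == len(pred_labels)
    matches = sum(g == p for g, p in zip(gold_labels, pred_labels))
    return matches / len(gold_labels)

def first_error_index(step_labels):
    """Index of the first incorrect step, or None if every step is correct."""
    for i, step_ok in enumerate(step_labels):
        if not step_ok:
            return i
    return None

def first_error_correct(gold_labels, pred_labels):
    """True if the verifier flags the same first error as the annotators
    (or agrees that the proof has no error)."""
    return first_error_index(gold_labels) == first_error_index(pred_labels)
```

For example, a verifier that labels a three-step proof `[True, False, False]` against gold labels `[True, True, False]` scores 2/3 on step-level accuracy but misses the first error, which the annotators place at step 3 rather than step 2.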