GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning

📅 2025-04-01
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Existing process reward models (PRMs) suffer from three key limitations: coarse-grained supervision with poor generalization, scalar reward modeling that leaves the generative abilities of LLMs unused, and non-scalable inference-time computation. This paper introduces GenPRM, a generative PRM that judges each reasoning step explicitly via chain-of-thought reasoning and code-execution-based verification, breaking the scalar-reward paradigm. Its core innovations include a Relative Progress Estimation (RPE) mechanism for deriving process labels, a rationale synthesis framework augmented with executable code validation, and a scalable inference scheme supporting multi-path sampling and weighted aggregation at test time. Trained on only 23K samples from the MATH dataset, the 1.5B GenPRM outperforms GPT-4o, and the 7B variant surpasses Qwen2.5-Math-PRM-72B. GenPRM achieves state-of-the-art performance on ProcessBench and mathematical reasoning benchmarks while serving as an efficient, fine-grained critic for policy models.
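To make the test-time scaling idea concrete, here is a minimal sketch of sampling several generative critiques per step and aggregating them into a soft process reward. `sample_judgment`, the 8-sample default, and the min-over-steps aggregation are illustrative assumptions, not the paper's exact recipe.

```python
import random
from statistics import mean

def sample_judgment(question: str, steps: list[str], t: int) -> float:
    """Hypothetical stand-in for one generative PRM pass: the model writes
    a CoT critique (with code verification) of step t and ends with a
    yes/no verdict, mapped to 1.0/0.0. A random coin replaces the model here."""
    return float(random.random() > 0.5)

def step_score(question: str, steps: list[str], t: int, n_samples: int = 8) -> float:
    """Test-time scaling: draw several independent critiques of step t and
    average their binary verdicts into a soft process reward."""
    return mean(sample_judgment(question, steps, t) for _ in range(n_samples))

def solution_score(question: str, steps: list[str], n_samples: int = 8) -> float:
    """Aggregate per-step scores; min-over-steps penalizes the weakest step,
    a common PRM aggregation choice (an assumption, not necessarily GenPRM's)."""
    return min(step_score(question, steps, t, n_samples) for t in range(len(steps)))
```

Averaging sampled verdicts is what lets extra test-time compute buy accuracy: more critiques per step reduce the variance of each step score before aggregation.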

📝 Abstract
Recent advancements in Large Language Models (LLMs) have shown that it is promising to utilize Process Reward Models (PRMs) as verifiers to enhance the performance of LLMs. However, current PRMs face three key challenges: (1) limited process supervision and generalization capabilities, (2) dependence on scalar value prediction without leveraging the generative abilities of LLMs, and (3) inability to scale the test-time compute of PRMs. In this work, we introduce GenPRM, a generative process reward model that performs explicit Chain-of-Thought (CoT) reasoning with code verification before providing judgment for each reasoning step. To obtain high-quality process supervision labels and rationale data, we propose Relative Progress Estimation (RPE) and a rationale synthesis framework that incorporates code verification. Experimental results on ProcessBench and several mathematical reasoning tasks show that GenPRM significantly outperforms prior PRMs with only 23K training samples from the MATH dataset. Through test-time scaling, a 1.5B GenPRM outperforms GPT-4o, and a 7B GenPRM surpasses Qwen2.5-Math-PRM-72B on ProcessBench. Additionally, GenPRM demonstrates a strong ability to serve as a critic model for policy model refinement. This work establishes a new paradigm for process supervision that bridges the gap between PRMs and critic models in LLMs. Our code, model, and data will be available at https://ryanliu112.github.io/GenPRM.
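The abstract names Relative Progress Estimation (RPE) without spelling it out. Below is a minimal sketch of one plausible reading, assuming RPE labels a step by how it changes the Monte Carlo success rate of rollouts from the growing solution prefix; the function name, the threshold, and the comparison direction are assumptions for illustration only.

```python
def relative_progress_labels(prefix_success_rates: list[float],
                             threshold: float = 0.0) -> list[int]:
    """prefix_success_rates[t] is the estimated probability that rollouts
    from the solution prefix ending at step t reach a correct final answer
    (index 0 = bare question, no steps). A step is labeled correct (1) when
    it does not lower that probability by more than `threshold` -- a
    hypothetical reading of RPE, not the paper's verbatim definition."""
    labels = []
    prev = prefix_success_rates[0]
    for rate in prefix_success_rates[1:]:
        labels.append(1 if rate - prev >= -threshold else 0)
        prev = rate
    return labels

# The third step tanks the success rate, so it is labeled incorrect:
print(relative_progress_labels([0.6, 0.65, 0.7, 0.3, 0.35]))  # [1, 1, 0, 1]
```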
Problem

Research questions and friction points this paper is trying to address.

Enhancing PRMs with generative reasoning for better process supervision
Overcoming PRMs' scalar value dependency via Chain-of-Thought and code verification
Scaling test-time compute of PRMs to outperform large models efficiently
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative reasoning with code verification (see the sketch after this list)
Relative Progress Estimation for supervision
Test-time compute scaling for performance
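As a concrete illustration of the "generative reasoning with code verification" bullet above, the sketch below executes a model-generated check and reports pass/fail. The bare `exec()` is a deliberate simplification; any real deployment would run generated code in a proper sandbox.

```python
import contextlib
import io

def run_verification(code: str) -> tuple[bool, str]:
    """Run a model-generated verification snippet, returning whether it
    executed without error plus anything it printed. exec() with no
    isolation is sketch-only; real systems need sandboxing."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, {})  # fresh globals; builtins are injected automatically
        return True, buffer.getvalue()
    except Exception as err:
        return False, f"{type(err).__name__}: {err}"

# Hypothetical generated check for a step claiming 12 * 13 = 156:
print(run_verification("assert 12 * 13 == 156; print('step verified')"))
# -> (True, 'step verified\n')
```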
👥 Authors
Jian Zhao (Tsinghua University)
Runze Liu (Tsinghua University, Shanghai AI Laboratory)
Kaiyan Zhang (Tsinghua University; interests: Foundation Models, Collective Intelligence, Scientific Intelligence)
Zhimu Zhou (BUPT)
Junqi Gao (Shanghai AI Laboratory, Harbin Institute of Technology; interests: Deep Learning, Generative Models, Continual Learning)
Dong Li (Harbin Institute of Technology)
Jiafei Lyu (PhD in Control Science and Engineering, Tsinghua University; interests: Deep Reinforcement Learning)
Zhouyi Qian (Harbin Institute of Technology)
Biqing Qi (Shanghai AI Laboratory)
Xiu Li (Bytedance Seed; interests: Computer Vision, Computer Graphics, 3D Vision)
Bowen Zhou (Tsinghua University, Shanghai AI Laboratory)