Process Reward Model with Q-Value Rankings

📅 2024-10-15
🏛️ arXiv.org
📈 Citations: 1
✨ Influential: 1
🤖 AI Summary
Existing process reward models (PRMs) treat step-wise evaluation as independent classification tasks, neglecting dependencies among reasoning steps, which leads to coarse-grained and suboptimal reward distributions. This work formalizes PRM learning as a Markov decision process (MDP) for the first time and introduces a Q-value-based ranking loss to enable fine-grained, theoretically grounded process-level reward learning. Our method integrates Q-value modeling, multi-policy sampling, process-level supervision within a reinforcement learning framework, and backbone-agnostic model adaptation. On multi-step reasoning benchmarks, it consistently outperforms classification-based PRMs. Ablation studies confirm that the proposed comparative loss is the primary driver of performance gains, significantly improving stability and generalization across diverse language models and sampling strategies.
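The paper's exact comparative loss is not reproduced in this summary. As a rough illustration only, here is a minimal NumPy sketch of one plausible pairwise ranking loss that pushes Q-values of correct steps above those of incorrect steps; the function name and the logistic loss form are assumptions, not PQM's actual definition.

```python
import numpy as np

def pairwise_q_ranking_loss(q_values, labels):
    """Hypothetical comparative loss: every Q-value of a correct step
    should rank above every Q-value of an incorrect step.

    q_values: per-step Q-value estimates for one reasoning chain.
    labels:   1 (step correct) / 0 (step incorrect) per step.
    """
    q = np.asarray(q_values, dtype=float)
    y = np.asarray(labels)
    q_pos = q[y == 1]  # Q-values of correct steps
    q_neg = q[y == 0]  # Q-values of incorrect steps
    # All (correct, incorrect) pairs; penalize via log-sigmoid of the gap.
    diffs = q_pos[:, None] - q_neg[None, :]
    return float(np.mean(np.log1p(np.exp(-diffs))))

# A well-ranked chain incurs lower loss than a poorly ranked one.
good = pairwise_q_ranking_loss([2.0, 1.5, -1.0], [1, 1, 0])
bad = pairwise_q_ranking_loss([-1.0, -1.5, 2.0], [1, 1, 0])
assert good < bad
```

Unlike per-step cross-entropy, this loss is a function of Q-value *comparisons*, so a step's reward is shaped relative to the other steps in the chain rather than judged in isolation.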

πŸ“ Abstract
Process Reward Modeling (PRM) is critical for complex reasoning and decision-making tasks where the accuracy of intermediate steps significantly influences the overall outcome. Existing PRM approaches, primarily framed as classification problems, employ cross-entropy loss to independently evaluate each step's correctness. This method can lead to suboptimal reward distribution and does not adequately address the interdependencies among steps. To address these limitations, we introduce the Process Q-value Model (PQM), a novel framework that redefines PRM in the context of a Markov Decision Process. PQM optimizes Q-value rankings based on a novel comparative loss function, enhancing the model's ability to capture the intricate dynamics among sequential decisions. This approach provides a more granular and theoretically grounded methodology for process rewards. Our extensive empirical evaluations across various sampling policies, language model backbones, and multi-step reasoning benchmarks show that PQM outperforms classification-based PRMs. The effectiveness of the comparative loss function is highlighted in our comprehensive ablation studies, confirming PQM's practical efficacy and theoretical advantage.
Problem

Research questions and friction points this paper is trying to address.

Existing PRMs frame step-wise evaluation as independent classification, ignoring interdependencies among reasoning steps.
Cross-entropy training over isolated steps yields coarse-grained, suboptimal reward distributions.
Process-level reward learning lacks a principled, theoretically grounded formulation for sequential decisions.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Process Q-value Model (PQM)
Reformulation of PRM as a Markov Decision Process
Comparative loss function over Q-value rankings
Wendi Li
PhD at UW-Madison
Yixuan Li
Department of Computer Sciences, University of Wisconsin-Madison