🤖 AI Summary
Existing process reward models (PRMs) rely solely on unidirectional left-to-right evaluation, which limits their ability to leverage global context when verifying the consistency of early reasoning steps. To address this, we propose the Bidirectional Process Reward Model (BiPRM), the first PRM framework to incorporate a parallel right-to-left evaluation stream. By reversing reasoning trajectories through prompt modifications alone, BiPRM enables fine-grained, bidirectional step-wise scoring without introducing additional parameters or inference latency, allowing later steps to verify earlier ones. Our method is fully compatible with mainstream PRM architectures and integrates uniformly across three backbone models and three distinct PRM objectives. On two mathematical reasoning benchmarks, BiPRM achieves up to a 31.9% improvement in step-level reward estimation accuracy over unidirectional baselines, significantly enhancing reasoning consistency, robustness, and generalization.
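The R2L stream described above is realized purely through prompting. As a rough illustration only, here is a minimal Python sketch of what reversing a trajectory for backward evaluation could look like; the function name, prompt wording, and step format are assumptions, not the paper's actual template:

```python
def build_r2l_prompt(question: str, steps: list[str]) -> str:
    """Build a right-to-left evaluation prompt by reversing the step order.

    Hypothetical template: the paper implements R2L purely via prompt
    modifications, but its exact wording is not given in this summary.
    """
    reversed_steps = reversed(steps)
    body = "\n".join(f"Step {i}: {s}" for i, s in enumerate(reversed_steps, 1))
    return (
        f"Question: {question}\n"
        "The solution below is shown in reverse, from the final step back "
        "to the first. Score each step for consistency with the steps "
        "already shown (i.e., the steps that originally came after it).\n"
        f"{body}"
    )
```

Because the reversal happens entirely in the prompt, the same PRM backbone can serve both streams, which is consistent with the claim of no added parameters or inference latency.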
📝 Abstract
Process Reward Models (PRMs) have emerged as a promising approach to enhancing the reasoning quality of Large Language Models (LLMs) by assigning fine-grained scores to intermediate reasoning steps within a solution trajectory. However, existing PRMs predominantly adopt a unidirectional left-to-right (L2R) evaluation paradigm, which limits their ability to leverage global context and makes it difficult to verify the consistency of earlier steps based on later ones. To address this limitation, we propose a novel bidirectional evaluation paradigm, the Bidirectional Process Reward Model (BiPRM). BiPRM seamlessly incorporates a parallel right-to-left (R2L) evaluation stream alongside the conventional L2R flow, enabling later reasoning steps to help assess earlier ones in real time. Notably, the built-in R2L evaluation is implemented solely through prompt modifications that reverse the original reasoning trajectory, without introducing any additional parameters or inference latency. This keeps BiPRM both efficient and broadly compatible with existing PRM approaches. We conduct extensive experiments on two mathematical reasoning benchmarks using samples generated by three different policy models, evaluating BiPRM across three backbones and three distinct PRM objectives. Across all settings, BiPRM consistently outperforms unidirectional baselines, achieving up to a 31.9% improvement in stepwise reward estimation. Overall, our results highlight BiPRM's effectiveness, robustness, and general applicability, offering a promising new direction for process-based reward modeling.
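Since the abstract describes the R2L stream as running in parallel with the L2R flow over the reversed trajectory, the per-step scores of the two streams must be aligned before they can be combined. The sketch below assumes the R2L stream scores steps in reversed order and that the two streams are fused by simple averaging; the abstract does not specify the fusion rule, so both the alignment and the averaging are illustrative assumptions:

```python
from statistics import mean

def bidirectional_step_scores(l2r_scores: list[float],
                              r2l_scores: list[float]) -> list[float]:
    """Fuse L2R and R2L streams into one per-step reward.

    l2r_scores[i] scores original step i; r2l_scores[j] scores the j-th
    step of the *reversed* trajectory, i.e. original step n - 1 - j.
    Averaging is an illustrative assumption, not the paper's stated rule.
    """
    n = len(l2r_scores)
    assert len(r2l_scores) == n, "both streams must score every step"
    aligned_r2l = r2l_scores[::-1]  # map reversed-order scores back
    return [mean(pair) for pair in zip(l2r_scores, aligned_r2l)]
```

For example, with l2r_scores = [0.9, 0.7, 0.4] and r2l_scores = [0.5, 0.6, 0.8] (scored on the reversed order), the fused result is [0.85, 0.65, 0.45].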