Beyond Correctness: Harmonizing Process and Outcome Rewards through RL Training

📅 2025-09-03
🤖 AI Summary
To address the coarse-grained nature of outcome reward models (ORMs) and the vulnerability of process reward models (PRMs) to adversarial exploitation in mathematical reasoning, this paper proposes PROF: a consistency-driven sample filtering method that jointly leverages PRM’s fine-grained stepwise supervision and ORM’s global outcome evaluation—bypassing noise and reward hacking inherent in naive weighted fusion. Its core innovation lies in dynamically selecting high-quality reasoning samples based on alignment between process and outcome rewards, thereby enabling balanced training. Within a reinforcement learning framework, response selection and optimization are performed using the average process reward. Experiments demonstrate that PROF achieves over a 4% absolute accuracy improvement over hybrid baselines, while significantly enhancing correctness and coherence of intermediate reasoning steps.

📝 Abstract
Reinforcement learning with verifiable rewards (RLVR) has emerged as a predominant paradigm for mathematical reasoning tasks, offering stable improvements in reasoning ability. However, Outcome Reward Models (ORMs) in RLVR are too coarse-grained to distinguish flawed reasoning within correct answers or valid reasoning within incorrect answers. This lack of granularity introduces significantly noisy and misleading gradients and hinders further progress in reasoning process quality. While Process Reward Models (PRMs) offer fine-grained guidance for intermediate steps, they frequently suffer from inaccuracies and are susceptible to reward hacking. To resolve this dilemma, we introduce PRocess cOnsistency Filter (PROF), an effective data process curation method that harmonizes noisy, fine-grained process rewards with accurate, coarse-grained outcome rewards. Rather than naively blending PRM and ORM in the objective function (arXiv:2506.18896), PROF leverages their complementary strengths through consistency-driven sample selection. Our approach retains correct responses with higher averaged process values and incorrect responses with lower averaged process values, while maintaining positive/negative training sample balance. Extensive experiments demonstrate that our method not only consistently improves the final accuracy by over 4% compared to the blending approaches, but also strengthens the quality of intermediate reasoning steps. Code and training recipes are available at https://github.com/Chenluye99/PROF.
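The consistency-driven selection described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `Response` class, the `prof_filter` function, and the `keep_frac` parameter are hypothetical names assumed for this example.

```python
# Hypothetical sketch of PROF-style consistency filtering:
# keep correct responses with the HIGHEST average process reward and
# incorrect responses with the LOWEST, while preserving the batch's
# positive/negative balance. Names and thresholds are illustrative.
from dataclasses import dataclass
from typing import List


@dataclass
class Response:
    step_rewards: List[float]  # PRM score for each reasoning step
    is_correct: bool           # ORM verdict on the final answer

    @property
    def avg_process_reward(self) -> float:
        # Average stepwise process reward over the response
        return sum(self.step_rewards) / len(self.step_rewards)


def prof_filter(responses: List[Response], keep_frac: float = 0.5) -> List[Response]:
    """Select samples whose process rewards are consistent with the outcome."""
    # Correct answers: highest average process reward first
    correct = sorted((r for r in responses if r.is_correct),
                     key=lambda r: r.avg_process_reward, reverse=True)
    # Incorrect answers: lowest average process reward first
    incorrect = sorted((r for r in responses if not r.is_correct),
                       key=lambda r: r.avg_process_reward)
    # Keep the same fraction of each side to stay balanced
    k_pos = max(1, int(len(correct) * keep_frac)) if correct else 0
    k_neg = max(1, int(len(incorrect) * keep_frac)) if incorrect else 0
    return correct[:k_pos] + incorrect[:k_neg]
```

Under this sketch, a correct answer reached through low-scoring (likely flawed) intermediate steps is filtered out, as is an incorrect answer whose steps the PRM scored highly, which are the two cases where process and outcome signals disagree.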
Problem

Research questions and friction points this paper is trying to address.

Harmonizing noisy process rewards with accurate outcome rewards
Resolving coarse-grained outcome rewards lacking reasoning granularity
Addressing process reward inaccuracies and reward hacking susceptibility
Innovation

Methods, ideas, or system contributions that make the work stand out.

Combines process and outcome rewards
Uses consistency-driven sample selection
Filters data to harmonize rewards