PRL: Process Reward Learning Improves LLMs' Reasoning Ability and Broadens the Reasoning Boundary

📅 2026-01-15
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing approaches predominantly rely on trajectory-level outcome rewards, lacking fine-grained supervision over the reasoning process and often requiring additional components such as Monte Carlo tree search (MCTS) or external reward models, which compromises training efficiency and transparency. This work proposes Process Reward Learning (PRL), which, for the first time, derives a theoretically grounded form of step-level process rewards from an entropy-regularized reinforcement learning objective, thereby transforming sparse global outcome rewards into dense process-level supervision signals. PRL requires no auxiliary modules; instead, it achieves efficient policy optimization by aligning the policy with a reference model through KL divergence regularization. Experiments demonstrate that PRL significantly enhances both the average reasoning capability and the diversity of successful reasoning paths in large language models, as measured by average@n and pass@n metrics.


📝 Abstract
Improving the reasoning abilities of Large Language Models (LLMs) has been an active research topic. However, most existing work relies on outcome rewards at the trajectory level and therefore lacks fine-grained supervision of the reasoning process. Training frameworks that do incorporate process signals tend to depend on costly additional components, such as MCTS or a separately trained reward model, which hurts training efficiency. Moreover, the intuition behind these process-signal designs often lacks rigorous theoretical support, leaving the optimization mechanism opaque. In this paper, we propose Process Reward Learning (PRL), which decomposes the entropy-regularized reinforcement learning objective into intermediate steps, yielding principled process rewards that can be assigned to the model accordingly. Starting from this theoretical motivation, we derive a formulation of PRL that is equivalent to reward maximization plus a KL-divergence penalty between the policy model and a reference model. Crucially, PRL turns the outcome reward into process-level supervision signals, which better guide exploration during RL optimization. Our experiments show that PRL not only improves LLMs' average reasoning performance, measured by average@n, but also broadens the reasoning boundary, improving the pass@n metric. Extensive experiments verify that PRL's effectiveness generalizes.
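The core idea described above, converting a sparse trajectory-level outcome reward plus a KL penalty into dense per-step signals, can be sketched with the standard KL-shaped per-token reward used in RLHF-style training. This is a minimal illustration under that assumption, not the paper's exact derivation; the function name, `beta`, and the token-level log-probability inputs are hypothetical:

```python
def dense_process_rewards(logp_policy, logp_ref, outcome_reward, beta=0.1):
    """Turn a sparse trajectory-level reward into dense per-step rewards.

    Each step receives the KL-style penalty
    -beta * (log pi(a_t | s_t) - log pi_ref(a_t | s_t)),
    and the outcome reward is added at the final step. Summing the dense
    rewards recovers the trajectory-level KL-regularized objective.
    """
    rewards = [-beta * (lp - lr) for lp, lr in zip(logp_policy, logp_ref)]
    rewards[-1] += outcome_reward
    return rewards

# Example: a 4-token trajectory with a terminal outcome reward of 1.0.
logp_policy = [-1.2, -0.8, -2.0, -0.5]
logp_ref = [-1.0, -1.0, -1.5, -0.5]
dense = dense_process_rewards(logp_policy, logp_ref, 1.0, beta=0.1)

# Sanity check: the dense rewards sum to
# outcome_reward - beta * sum(log pi - log pi_ref).
total = sum(dense)
```

Because the per-step terms telescope back to the global objective, optimizing against these dense rewards is equivalent to reward maximization with a KL penalty, while giving the learner a signal at every step rather than only at the end of the trajectory.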
Problem

Research questions and friction points this paper is trying to address.

reasoning ability
process reward
large language models
reinforcement learning
outcome reward
Innovation

Methods, ideas, or system contributions that make the work stand out.

Process Reward Learning
Reasoning Ability
Reinforcement Learning
Fine-grained Supervision
Entropy Regularization