🤖 AI Summary
This work addresses the challenge of sparse and unverifiable reward signals in reinforcement learning for general-domain reasoning tasks, which hinders effective supervision of the reasoning process. To overcome this limitation, the authors propose a self-supervised framework that obviates the need for external reward models or human annotations. The approach synthesizes and filters high-quality reference reasoning chains and introduces a Path Faithfulness Reward (PFR) based on conditional probabilities, delivering fine-grained, dense probabilistic rewards at each reasoning step. By flexibly integrating process-level rewards with final-answer rewards, the method significantly outperforms strong baselines on reading comprehension and medical question-answering benchmarks, demonstrating its effectiveness and generalization across diverse reasoning tasks.
📝 Abstract
While reinforcement learning with verifiable rewards (RLVR) has advanced LLM reasoning in structured domains like mathematics and programming, its application to general-domain reasoning tasks remains challenging due to the absence of verifiable reward signals. To this end, methods like Reinforcement Learning with Reference Probability Reward (RLPR) have emerged, leveraging the probability of generating the final answer as a reward signal. However, these outcome-focused approaches neglect crucial step-by-step supervision of the reasoning process itself. To address this gap, we introduce Probabilistic Process Supervision (P2S), a novel self-supervised framework that provides fine-grained process rewards without requiring a separate reward model or human-annotated reasoning steps. During reinforcement learning, P2S synthesizes and filters a high-quality reference reasoning chain (gold-CoT). The core of our method is to calculate a Path Faithfulness Reward (PFR) for each reasoning step, derived from the conditional probability of generating the gold-CoT's suffix given the model's current reasoning prefix. Crucially, this PFR can be flexibly integrated with any outcome-based reward, directly tackling the reward sparsity problem by providing dense guidance. Extensive experiments on reading comprehension and medical question-answering benchmarks show that P2S significantly outperforms strong baselines.
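To make the PFR definition concrete, the abstract's "conditional probability of generating the gold-CoT's suffix, given the model's current reasoning prefix" can be sketched as a token-level computation. The sketch below is illustrative only: the function names (`suffix_logprob`, `path_faithfulness_reward`), the length normalization, and the toy log-probability function are assumptions, not the paper's actual implementation; in practice the log-probabilities would come from the policy or a frozen reference LLM.

```python
import math
from typing import Callable, Sequence

def suffix_logprob(logprob: Callable[[Sequence[str], str], float],
                   prefix: Sequence[str],
                   suffix: Sequence[str]) -> float:
    """Sum log P(t_k | prefix, t_<k) over the gold-CoT suffix tokens.

    `logprob(context, token)` is a stand-in for a model call returning
    the log-probability of `token` given `context`.
    """
    ctx = list(prefix)
    total = 0.0
    for tok in suffix:
        total += logprob(ctx, tok)
        ctx.append(tok)  # condition subsequent tokens on earlier ones
    return total

def path_faithfulness_reward(logprob: Callable[[Sequence[str], str], float],
                             model_prefix: Sequence[str],
                             gold_suffix: Sequence[str]) -> float:
    """Length-normalized probability of the gold suffix given the model's
    current reasoning prefix; yields a dense per-step reward in (0, 1].
    (Normalization by suffix length is an assumption for illustration.)
    """
    lp = suffix_logprob(logprob, model_prefix, gold_suffix)
    return math.exp(lp / max(len(gold_suffix), 1))

# Toy demo: a uniform model over a 4-token vocabulary assigns each
# continuation probability 0.25, so the normalized PFR is 0.25.
uniform = lambda ctx, tok: math.log(0.25)
print(path_faithfulness_reward(uniform, ["step1"], ["step2", "answer"]))
```

Because the reward is computed at every reasoning step from the prefix observed so far, it can be summed or interpolated with an outcome-based final-answer reward, which is how the abstract describes densifying the otherwise sparse RL signal.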