Save the Good Prefix: Precise Error Penalization via Process-Supervised RL to Enhance LLM Reasoning

📅 2026-01-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses key limitations in existing reinforcement learning approaches for language model reasoning, which rely on sparse outcome-based rewards and therefore fail to credit partially correct intermediate steps. Moreover, process reward models (PRMs) produce noisy step-level scores, and the benchmarks used to evaluate them (detecting the first incorrect step) are misaligned with how their scores are actually consumed in RL. To overcome these issues, the authors propose Verifiable Prefix Policy Optimization (VPPO), a novel method that uses process supervision only to localize the first error in a reasoning trajectory. VPPO partitions each trajectory into a verified correct prefix and an erroneous suffix, applying rewards exclusively to the prefix and targeted penalties starting at the error point. This yields stable, interpretable learning signals that substantially alleviate the credit assignment problem. Empirical results demonstrate that VPPO consistently outperforms both sparse-reward RL and existing PRM-guided methods across multiple reasoning benchmarks in both Pass@1 and Pass@K metrics.

📝 Abstract
Reinforcement learning (RL) has emerged as a powerful framework for improving the reasoning capabilities of large language models (LLMs). However, most existing RL approaches rely on sparse outcome rewards, which fail to credit correct intermediate steps in partially successful solutions. Process reward models (PRMs) offer fine-grained step-level supervision, but their scores are often noisy and difficult to evaluate. As a result, recent PRM benchmarks focus on a more objective capability: detecting the first incorrect step in a reasoning path. However, this evaluation target is misaligned with how PRMs are typically used in RL, where their step-wise scores are treated as raw rewards to maximize. To bridge this gap, we propose Verifiable Prefix Policy Optimization (VPPO), which uses PRMs only to localize the first error during RL. Given an incorrect rollout, VPPO partitions the trajectory into a verified correct prefix and an erroneous suffix based on the first error, rewarding the former while applying targeted penalties only after the detected mistake. This design yields stable, interpretable learning signals and improves credit assignment. Across multiple reasoning benchmarks, VPPO consistently outperforms sparse-reward RL and prior PRM-guided baselines on both Pass@1 and Pass@K.
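The core reward-shaping idea from the abstract can be sketched in a few lines. This is a hedged illustration, not the authors' implementation: the function name `vppo_step_rewards`, the reward magnitudes, and the exact treatment of the erroneous suffix are assumptions; the paper only specifies that the verified prefix is rewarded and penalties are applied starting at the first detected error.

```python
# Illustrative sketch (not the authors' code) of VPPO-style reward shaping:
# given the index of the first incorrect step, as localized by a PRM,
# reward the verified correct prefix and penalize from the error point on.
# Reward/penalty magnitudes are assumed values for illustration.

def vppo_step_rewards(num_steps, first_error,
                      prefix_reward=1.0, error_penalty=-1.0):
    """Assign per-step rewards for one rollout.

    num_steps:   total reasoning steps in the trajectory
    first_error: 0-based index of the first incorrect step,
                 or None if the rollout is fully correct
    """
    if first_error is None:
        # Fully correct rollout: every step is rewarded.
        return [prefix_reward] * num_steps
    return [
        prefix_reward if t < first_error else error_penalty
        for t in range(num_steps)
    ]

# A 5-step trajectory whose first error is at step 2:
print(vppo_step_rewards(5, 2))  # → [1.0, 1.0, -1.0, -1.0, -1.0]
```

Compared with using raw PRM scores as rewards, this binarized prefix/suffix signal is what the abstract credits for the stable, interpretable credit assignment.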
Problem

Research questions and friction points this paper is trying to address.

reinforcement learning
large language models
reasoning
process reward models
credit assignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Process-Supervised RL
Verifiable Prefix Policy Optimization
Error Localization
Credit Assignment
Reasoning Enhancement