🤖 AI Summary
This work addresses the inefficiency of training large language models for multi-step reasoning, which stems from sparse rewards and difficult credit assignment. To mitigate this, the authors propose a decoupled reward mechanism that evaluates reasoning processes and final outcomes separately. By introducing a step-wise Marginal Information Gain (MIG) signal and a monotonic history watermark, they design a decoupled masking strategy and a dual-gated supervised fine-tuning (SFT) objective that disentangle credit attribution between intermediate steps and final results, enabling stable and sample-efficient reinforcement learning. Evaluated on textual and multimodal benchmarks, including MATH and Super-CLEVR, the method consistently outperforms baselines such as GRPO in sample efficiency, final accuracy, and out-of-distribution zero-shot transfer.
📝 Abstract
Reinforcement Learning (RL) is a potent paradigm for enhancing the reasoning capabilities of Large Language Models (LLMs), yet standard outcome-based approaches often suffer from reward sparsity and inefficient credit assignment. In this paper, we propose a framework that provides continuous reward signals by introducing a Step-wise Marginal Information Gain (MIG) mechanism, which quantifies the intrinsic value of each reasoning step against a Monotonic Historical Watermark and thereby filters out training noise. To disentangle credit distribution, we implement a Decoupled Masking Strategy that applies process-oriented rewards to the chain-of-thought (CoT) tokens and outcome-oriented rewards to the full completion. We further incorporate a Dual-Gated SFT objective that stabilizes training with high-quality structural and factual signals. Extensive experiments on textual and multimodal benchmarks (e.g., MATH, Super-CLEVR) show that our approach consistently outperforms baselines such as GRPO in both sample efficiency and final accuracy. Moreover, our model exhibits superior out-of-distribution robustness, with promising zero-shot transfer to unseen, challenging reasoning tasks.
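To make the two core ideas concrete, here is a minimal sketch of (a) rewarding each reasoning step only for its marginal gain over a monotonic watermark, and (b) decoupling process rewards (CoT tokens only) from outcome rewards (full completion). The function names, the per-step scores, and the token-mask layout are illustrative assumptions, not the paper's exact formulation.

```python
def mig_rewards(step_scores, watermark):
    """Reward each step only for improvement beyond the best score
    seen so far (the watermark), which never decreases. Steps that do
    not raise the watermark receive zero, filtering out noisy
    non-improving steps."""
    rewards = []
    for s in step_scores:
        gain = max(0.0, s - watermark)  # marginal information gain
        rewards.append(gain)
        watermark = max(watermark, s)   # monotonic update
    return rewards, watermark

def decoupled_masks(n_cot_tokens, n_total_tokens):
    """Process-oriented rewards apply only to chain-of-thought tokens;
    outcome-oriented rewards apply to the full completion."""
    process_mask = [1] * n_cot_tokens + [0] * (n_total_tokens - n_cot_tokens)
    outcome_mask = [1] * n_total_tokens
    return process_mask, outcome_mask

# Toy example with hypothetical per-step quality scores.
rewards, wm = mig_rewards([0.2, 0.5, 0.4, 0.7], watermark=0.0)
# rewards ≈ [0.2, 0.3, 0.0, 0.2]; wm == 0.7 (the third step adds nothing new)
p_mask, o_mask = decoupled_masks(3, 5)
```

In this toy run the third step scores below the watermark and earns zero reward, while the process mask confines step-wise credit to the first three (CoT) tokens.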