🤖 AI Summary
In rule-based-reward RLHF, when all sampled responses for a prompt share the same outcome, the group-based advantage degenerates to zero and gradients vanish, which severely limits training efficiency and the performance of large language models on complex reasoning tasks. To address this, we propose Consistency-aware Policy Optimization (CoPO), a framework with three core contributions: (1) a structured global reward grounded in outcome consistency, which preserves effective learning signals for highly consistent sample groups; (2) an entropy-driven soft blending mechanism that dynamically balances local exploration and global convergence; and (3) the integration of this regularized advantage estimation into policy gradient optimization. Extensive experiments on multiple mathematical reasoning benchmarks show significant improvements over strong baselines, validating CoPO's robustness and generalizability across diverse reasoning tasks. The implementation is publicly available.
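The degeneration described above is easy to see in a minimal sketch of GRPO-style group-normalized advantages (the function and shapes below are illustrative, not taken from the released code): when every response in a group receives the same rule-based reward, the standardized advantage is zero for all of them.

```python
import numpy as np

def group_normalized_advantage(rewards, eps=1e-8):
    """Standardize rule-based rewards within a group of responses sampled
    for the same prompt (GRPO-style local advantage)."""
    rewards = np.asarray(rewards, dtype=float)
    return (rewards - rewards.mean()) / (rewards.std() + eps)

# Mixed outcomes: informative, non-zero advantages drive the policy gradient.
print(group_normalized_advantage([1.0, 0.0, 1.0, 0.0]))  # ~[ 1, -1,  1, -1]

# Fully consistent outcomes (all correct or all incorrect): every advantage
# is exactly zero, so this group contributes no gradient signal.
print(group_normalized_advantage([1.0, 1.0, 1.0, 1.0]))  # [0., 0., 0., 0.]
```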
📝 Abstract
Reinforcement learning has significantly enhanced the reasoning capabilities of Large Language Models (LLMs) in complex problem-solving tasks. Recently, the introduction of DeepSeek R1 has inspired a surge of interest in leveraging rule-based rewards as a low-cost alternative for computing advantage functions and guiding policy optimization. However, a common challenge observed across many replication and extension efforts is that when the multiple responses sampled for a single prompt converge to identical outcomes, whether correct or incorrect, the group-based advantage degenerates to zero. This leads to vanishing gradients and renders the corresponding samples ineffective for learning, ultimately limiting training efficiency and downstream performance. To address this issue, we propose a consistency-aware policy optimization framework that introduces a structured global reward based on outcome consistency. The global loss built on this reward ensures that, even when model outputs show high intra-group consistency, training still receives a meaningful learning signal, encouraging the generation of correct and self-consistent reasoning paths from a global perspective. Furthermore, we incorporate an entropy-based soft blending mechanism that adaptively balances local advantage estimation with global optimization, enabling dynamic transitions between exploration and convergence throughout training. Our method introduces several key innovations in both reward design and optimization strategy. We validate its effectiveness through substantial performance gains on multiple mathematical reasoning benchmarks, highlighting the proposed framework's robustness and general applicability. The code for this work has been released at https://github.com/hijih/copo-code.git.
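As a rough illustration of the entropy-based soft blending idea, the sketch below mixes the local group-normalized advantage with a global, consistency-based reward, using a normalized entropy value as the mixing coefficient. The function, its arguments, and the specific gating rule are assumptions made for illustration; they are not taken from the paper or the released code.

```python
import numpy as np

def blended_advantage(rewards, entropy, global_reward, eps=1e-8):
    """Hypothetical entropy-gated blend: when the policy is still exploratory
    (high entropy), rely mostly on the local group-normalized advantage; as
    entropy drops and outputs become consistent, shift weight toward a global,
    consistency-based reward so the learning signal never vanishes.
    `global_reward` stands in for the structured outcome-consistency reward;
    its exact form is not specified here."""
    rewards = np.asarray(rewards, dtype=float)
    local_adv = (rewards - rewards.mean()) / (rewards.std() + eps)

    # Clip entropy to [0, 1] and use it directly as the mixing coefficient
    # (an assumption, not the paper's exact formulation).
    alpha = float(np.clip(entropy, 0.0, 1.0))
    return alpha * local_adv + (1.0 - alpha) * global_reward

# High entropy early in training: local exploration dominates the signal.
print(blended_advantage([1.0, 0.0, 1.0, 0.0], entropy=0.9, global_reward=0.5))

# Low entropy with fully consistent (all-correct) outcomes: the local term is
# zero, but the global consistency reward keeps a non-zero learning signal.
print(blended_advantage([1.0, 1.0, 1.0, 1.0], entropy=0.1, global_reward=1.0))
```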