🤖 AI Summary
This work addresses the challenges in group-based reinforcement learning, where reliance on absolute rewards often leads to sparse supervision signals and unstable reward estimates, thereby degrading advantage estimation. To mitigate these issues, the authors propose Reinforcement Learning with Relative Rewards (RLRR), a framework that shifts reward modeling from absolute values to intra-group relative rankings. Central to this approach is the Ranking Reward Model, a listwise preference model explicitly designed for group-level optimization. By leveraging relative preferences within groups, RLRR effectively alleviates signal sparsity and instability. Empirical results demonstrate that the proposed method significantly outperforms standard group-based reinforcement learning baselines on both reasoning benchmarks and open-ended generation tasks, yielding more stable and higher-performing policy optimization.
📄 Abstract
Reinforcement learning has become a cornerstone for enhancing the reasoning capabilities of Large Language Models, where group-based approaches such as GRPO have emerged as efficient paradigms that optimize policies by leveraging intra-group performance differences. However, these methods typically rely on absolute numerical rewards, which introduces intrinsic limitations. In verifiable tasks, identical evaluations within a group often result in sparse supervision, while in open-ended scenarios, unstable reward-model score ranges undermine advantage estimation based on group means. To address these limitations, we propose Reinforcement Learning with Relative Rewards (RLRR), a framework that shifts reward shaping from absolute scoring to relative ranking. Complementing this framework, we introduce the Ranking Reward Model, a listwise preference model tailored for group-based optimization that directly generates relative rankings. By transforming raw evaluations into robust relative signals, RLRR effectively mitigates signal sparsity and reward instability. Experimental results demonstrate that RLRR yields consistent performance improvements over standard group-based baselines across reasoning benchmarks and open-ended generation tasks.