From Absolute to Relative: Rethinking Reward Shaping in Group-Based Reinforcement Learning

πŸ“… 2026-01-30
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses a core challenge in group-based reinforcement learning: reliance on absolute rewards often yields sparse supervision signals and unstable reward estimates, which in turn degrades advantage estimation. To mitigate these issues, the authors propose Reinforcement Learning with Relative Rewards (RLRR), a framework that shifts reward modeling from absolute scores to intra-group relative rankings. Central to this approach is the Ranking Reward Model, a listwise preference model designed specifically for group-level optimization. By leveraging relative preferences within each group, RLRR alleviates both signal sparsity and reward instability. Empirical results show that the method consistently outperforms standard group-based reinforcement learning baselines on both reasoning benchmarks and open-ended generation tasks, yielding more stable and higher-performing policy optimization.

πŸ“ Abstract
Reinforcement learning has become a cornerstone for enhancing the reasoning capabilities of Large Language Models, where group-based approaches such as GRPO have emerged as efficient paradigms that optimize policies by leveraging intra-group performance differences. However, these methods typically rely on absolute numerical rewards, introducing intrinsic limitations. In verifiable tasks, identical group evaluations often result in sparse supervision, while in open-ended scenarios, the score range instability of reward models undermines advantage estimation based on group means. To address these limitations, we propose Reinforcement Learning with Relative Rewards (RLRR), a framework that shifts reward shaping from absolute scoring to relative ranking. Complementing this framework, we introduce the Ranking Reward Model, a listwise preference model tailored for group-based optimization to directly generate relative rankings. By transforming raw evaluations into robust relative signals, RLRR effectively mitigates signal sparsity and reward instability. Experimental results demonstrate that RLRR yields consistent performance improvements over standard group-based baselines across reasoning benchmarks and open-ended generation tasks.
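To make the abstract's two failure modes concrete, here is a minimal sketch (not the authors' implementation; function names and tie-handling are illustrative assumptions) contrasting GRPO-style normalization of absolute rewards with a rank-based reshaping: when every response in a group gets the same score, the absolute-reward advantages collapse to zero, while rank-based advantages are invariant to any monotone rescaling of the reward model's scores.

```python
import numpy as np

def grpo_advantages(rewards):
    """GRPO-style baseline: normalize absolute rewards within the group.
    When all rewards are identical (common in verifiable tasks), the
    advantages collapse to zero, giving no supervision signal."""
    r = np.asarray(rewards, dtype=float)
    std = r.std()
    if std < 1e-8:
        return np.zeros_like(r)
    return (r - r.mean()) / std

def rank_based_advantages(rewards):
    """Hypothetical sketch of relative-ranking reshaping: replace raw
    scores with their intra-group ranks before normalizing, so the
    signal does not depend on the reward model's score scale."""
    r = np.asarray(rewards, dtype=float)
    order = r.argsort()
    ranks = np.empty(len(r), dtype=float)
    ranks[order] = np.arange(len(r), dtype=float)
    # Average the ranks of tied rewards so equal responses get equal credit.
    for v in np.unique(r):
        mask = r == v
        ranks[mask] = ranks[mask].mean()
    std = ranks.std()
    if std < 1e-8:
        return np.zeros_like(ranks)
    return (ranks - ranks.mean()) / std
```

Under this sketch, `rank_based_advantages([1.0, 5.0, 100.0])` equals `rank_based_advantages([1.0, 2.0, 3.0])`, illustrating the scale-robustness the abstract attributes to relative signals; the paper's actual Ranking Reward Model produces the rankings directly via listwise preference modeling rather than by sorting scalar scores.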
Problem

Research questions and friction points this paper is trying to address.

reward shaping
group-based reinforcement learning
absolute rewards
reward instability
sparse supervision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Relative Reward
Group-Based Reinforcement Learning
Reward Shaping
Ranking Reward Model
Listwise Preference
πŸ”Ž Similar Papers
No similar papers found.
Wenzhe Niu
Tianjin University, Tianjin, China; Meituan, Beijing, China
Wei He
Meituan, Beijing, China
Zongxia Xie
Tianjin University, Tianjin, China; Meituan, Beijing, China
Jinpeng Ou
Meituan, Beijing, China
Huichuan Fan
Meituan, Beijing, China
Yuchen Ge
Meituan, Beijing, China
Yanru Sun
Tianjin University
Ziyin Wang
Tianjin University, Tianjin, China
Yizhao Sun
Meituan, Beijing, China
Chengshun Shi
Meituan, Beijing, China
Jiuchong Gao
Meituan, Beijing, China
Jinghua Hao
Meituan, Beijing, China
Renqing He
Meituan, Beijing, China