R^3: Replay, Reflection, and Ranking Rewards for LLM Reinforcement Learning

📅 2026-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the instability and inefficiency in reinforcement learning training of large language models for complex mathematical reasoning, which stems from the collapse of intra-group advantages. To mitigate this issue, the authors propose the R³ mechanism, which preserves intra-group advantages through cross-context replay, leverages failed trajectories for in-context self-reflection, and introduces a token-level structural entropy–based ranking reward function to effectively reduce advantage estimation bias. The proposed approach significantly enhances training stability and sample efficiency, achieving state-of-the-art performance on multiple mathematical reasoning benchmarks with the Deepseek-R1-Distill-Qwen-1.5B model while requiring fewer reasoning steps and substantially outperforming existing baselines.
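The cross-context replay idea described above can be sketched as a small buffer: keep the best-rewarded historical responses for each query, and re-inject them into a new rollout group whenever all fresh samples share the same reward, i.e. when the intra-group advantage would collapse to zero. This is a hypothetical illustration, not the paper's implementation; the class name, capacity, and collapse check are assumptions.

```python
import random
from collections import defaultdict


class CrossContextReplayBuffer:
    """Hypothetical sketch of cross-context replay: recall valuable
    historical trajectories of the same query to restore a reward gap
    when intra-group advantages collapse."""

    def __init__(self, capacity_per_query=4):
        self.capacity = capacity_per_query
        self.store = defaultdict(list)  # query -> [(reward, response), ...]

    def add(self, query, response, reward):
        bucket = self.store[query]
        bucket.append((reward, response))
        # Keep only the highest-reward trajectories for this query.
        bucket.sort(key=lambda item: item[0], reverse=True)
        del bucket[self.capacity:]

    def maybe_replay(self, query, group_rewards, k=1):
        """If all rewards in the current group are identical (advantage
        collapse), return up to k stored responses to mix back in."""
        if len(set(group_rewards)) > 1 or not self.store[query]:
            return []
        return [resp for _, resp in self.store[query][:k]]
```

Usage: after each rollout group, call `add` for any successful trajectory; before advantage computation, call `maybe_replay` and, if it returns anything, splice those responses into the group so the group-relative baseline is no longer degenerate.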

📝 Abstract
Large reasoning models (LRMs) aim to solve diverse and complex problems through structured reasoning. Recent advances in group-based policy optimization methods have shown promise in enabling stable advantage estimation without reliance on process-level annotations. However, these methods rely on advantage gaps induced by high-quality samples within the same batch, which makes the training process fragile and inefficient when intra-group advantages collapse under challenging tasks. To address these problems, we propose a reinforcement learning mechanism named \emph{\textbf{R^3}} that operates along three directions: (1) a \emph{cross-context \underline{\textbf{R}}eplay} strategy that maintains the intra-group advantage by recalling valuable examples from historical trajectories of the same query, (2) an \emph{in-context self-\underline{\textbf{R}}eflection} mechanism enabling models to refine outputs by leveraging past failures, and (3) a \emph{structural entropy \underline{\textbf{R}}anking reward}, which assigns relative rewards to truncated or failed samples by ranking responses based on token-level entropy patterns, capturing both local exploration and global stability. We implement our method on Deepseek-R1-Distill-Qwen-1.5B and train it on DeepscaleR-40k in the math domain. Experiments demonstrate that our method achieves SoTA performance on several math benchmarks, with significant accuracy gains and fewer reasoning tokens than the base model. Code and model will be released.
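The structural entropy ranking reward can be illustrated with a minimal sketch: compute per-token Shannon entropy for each failed or truncated response, summarize it (here with a simple mean, a stand-in for the paper's structural-entropy statistic), rank the responses, and map ranks linearly into a reward interval so that better-ranked failures receive less penalty. The ranking direction (lower mean entropy ranks better), the reward range, and the function names are all assumptions for illustration.

```python
import math


def token_entropy(probs):
    """Shannon entropy (nats) of one token's predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)


def entropy_ranking_rewards(responses_token_probs, low=-0.5, high=0.0):
    """Hypothetical ranking reward for failed/truncated responses.

    responses_token_probs: one list of per-token probability vectors per
    response. Each response is scored by its mean token entropy; responses
    are ranked ascending (assumed: lower entropy = more stable = better),
    and ranks are mapped linearly into [low, high]."""
    scores = []
    for probs_seq in responses_token_probs:
        entropies = [token_entropy(p) for p in probs_seq]
        scores.append(sum(entropies) / len(entropies))
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    n = len(scores)
    rewards = [0.0] * n
    for rank, idx in enumerate(order):
        frac = rank / (n - 1) if n > 1 else 0.0
        rewards[idx] = high - frac * (high - low)  # best rank -> high
    return rewards
```

Even when every sample in a group fails, this produces a non-degenerate reward spread, so the group-relative advantage does not collapse to zero.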
Problem

Research questions and friction points this paper is trying to address.

Large Reasoning Models
Group-based Policy Optimization
Advantage Estimation
Reinforcement Learning
Intra-group Advantage Collapse
Innovation

Methods, ideas, or system contributions that make the work stand out.

Replay
Reflection
Ranking Rewards
Structural Entropy
Group-based Policy Optimization