Rewards as Labels: Revisiting RLVR from a Classification Perspective

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses critical limitations in existing reinforcement learning from verifiable rewards (RLVR) methods—such as GRPO—where policy updates suffer from positive-sample gradient mismatch and negative-sample gradient dominance, leading to inefficient training and suboptimal performance. To overcome these issues, the authors propose REAL, a novel framework that reformulates policy optimization as a classification problem by treating verifiable rewards as categorical labels rather than scalar weights. REAL introduces anchor logits and a binary cross-entropy loss to enforce monotonic and bounded gradient allocation, effectively mitigating gradient mismatch. Evaluated on mathematical reasoning tasks, REAL consistently outperforms GRPO, DAPO, and other baselines, achieving average Pass@1 improvements of 6.7% and 6.2% on 1.5B and 7B models, respectively, while demonstrating more stable and efficient training dynamics.

📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) has recently advanced the capabilities of Large Language Models in complex reasoning tasks by providing explicit rule-based supervision. Among RLVR methods, GRPO and its variants have achieved strong empirical performance. Despite their success, we identify that they suffer from Gradient Misassignment in Positives and Gradient Domination in Negatives, which lead to inefficient and suboptimal policy updates. To address these issues, we propose Rewards as Labels (REAL), a novel framework that revisits verifiable rewards as categorical labels rather than scalar weights, thereby reformulating policy optimization as a classification problem. Building on this, we further introduce anchor logits to enhance policy learning. Our analysis reveals that REAL induces a monotonic and bounded gradient weighting, enabling balanced gradient allocation across rollouts and effectively mitigating the identified mismatches. Extensive experiments on mathematical reasoning benchmarks show that REAL improves training stability and consistently outperforms GRPO and strong variants such as DAPO. On the 1.5B model, REAL improves average Pass@1 over DAPO by 6.7%. These gains further scale to the 7B model, where REAL continues to outperform DAPO and GSPO by 6.2% and 1.7%, respectively. Notably, even with a vanilla binary cross-entropy loss, REAL remains stable and exceeds DAPO by 4.5% on average.
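The rewards-as-labels idea can be sketched briefly. The paper's exact logit construction (including the anchor logits) is not spelled out here, so the following is only an illustrative sketch under assumptions: each rollout gets a scalar logit (here, the policy's sequence log-probability minus a hypothetical anchor log-probability), the verifiable reward serves as a 0/1 label, and a standard binary cross-entropy is applied. The function name and inputs are illustrative, not the authors' API.

```python
import math

def rewards_as_labels_bce(seq_logprobs, anchor_logprobs, rewards):
    """Illustrative sketch of a rewards-as-labels objective.

    Assumed inputs (not from the paper): per-rollout sequence
    log-probabilities under the current policy, log-probabilities
    under some anchor (e.g. the rollout policy), and verifiable
    rewards in {0.0, 1.0} used as classification labels.
    """
    losses = []
    for lp, anchor_lp, r in zip(seq_logprobs, anchor_logprobs, rewards):
        z = lp - anchor_lp                 # scalar logit for this rollout
        p = 1.0 / (1.0 + math.exp(-z))     # sigmoid
        # Binary cross-entropy: pushes z up for correct rollouts (r=1)
        # and down for incorrect ones (r=0). Its gradient w.r.t. z is
        # (p - r), which lies in (-1, 1), so the per-rollout gradient
        # weight is bounded and monotone in z.
        losses.append(-(r * math.log(p) + (1 - r) * math.log(1.0 - p)))
    return sum(losses) / len(losses)
```

The bounded gradient (p - r) is the point of contrast with scalar-weighted objectives such as GRPO's, where advantage weights can concentrate updates on a few rollouts; here every rollout contributes a gradient weight of magnitude at most 1.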
Problem

Research questions and friction points this paper is trying to address.

Gradient Misassignment
Gradient Domination
Reinforcement Learning with Verifiable Rewards
Policy Optimization
RLVR
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rewards as Labels
Classification Perspective
Gradient Allocation
Verifiable Rewards
Policy Optimization
Zepeng Zhai (Xiaohongshu Inc.)
Meilin Chen (Xiaohongshu Inc.)
Jiaxuan Zhao (Xidian University)
Junlang Qian (Nanyang Technological University)
Lei Shen (Xiaohongshu Inc.)
Yuan Lu (I-squared-R)