🤖 AI Summary
This work addresses a critical limitation in existing reinforcement learning approaches for training large language models: the tendency to treat all correct answers equally, thereby inadvertently reinforcing low-quality reasoning trajectories that happen to yield correct outcomes. To overcome this, the authors propose a novel in-context reinforcement learning framework, In-Context RLVR, which introduces demonstration utility—the pedagogical value of a reasoning trajectory—as an intrinsic quality signal. Leveraging the model’s own in-context learning capability, the method employs a Bayesian-inspired evidence gain metric to implicitly reweight rewards without requiring external evaluators or additional computational overhead. Experimental results demonstrate that In-Context RLVR simultaneously improves both answer accuracy and reasoning quality on mathematical reasoning benchmarks, significantly outperforming standard RLVR baselines.
📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) improves reasoning in large language models but treats all correct solutions equally, potentially reinforcing flawed traces that reach correct answers by chance. We observe that better reasoning traces are better teachers: high-quality solutions serve as more effective demonstrations than low-quality ones. We term this teaching ability Demonstration Utility, and show that the policy model's own in-context learning ability provides an efficient way to measure it, yielding a quality signal termed Evidence Gain. To employ this signal during training, we introduce In-Context RLVR. Through a Bayesian analysis, we show that this objective implicitly reweights rewards by Evidence Gain, assigning higher weights to high-quality traces and lower weights to low-quality ones, without requiring costly computation or external evaluators. Experiments on mathematical benchmarks show improvements in both accuracy and reasoning quality over standard RLVR.
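To make the reweighting idea concrete, here is a minimal sketch of how an Evidence Gain signal could scale verifiable rewards. This is an illustration only, not the paper's actual implementation: the function names, the definition of Evidence Gain as a log-likelihood improvement on a held-out probe, and the exponential weighting with temperature `beta` are all assumptions for the example.

```python
import math

def evidence_gain(logp_with_demo, logp_without_demo):
    # Hypothetical Evidence Gain: how much a candidate trace, used as an
    # in-context demonstration, raises the policy's log-likelihood of
    # solving a held-out probe question (in nats).
    return logp_with_demo - logp_without_demo

def reweighted_rewards(rewards, gains, beta=1.0):
    # Exponentially upweight correct traces with high Evidence Gain and
    # downweight those with low (or negative) gain, normalizing so the
    # mean weight is 1 and total reward mass is preserved.
    weights = [math.exp(beta * g) for g in gains]
    mean_w = sum(weights) / len(weights)
    return [r * w / mean_w for r, w in zip(rewards, weights)]

# Two correct traces (verifiable reward 1.0 each): one that is a useful
# demonstration, one that reached the answer by a flawed shortcut.
rewards = [1.0, 1.0]
gains = [0.8, -0.3]
print(reweighted_rewards(rewards, gains))
```

Under this toy weighting, the pedagogically useful trace receives more than its original reward and the lucky trace receives less, while the total reward across the batch is unchanged.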