Good Reasoning Makes Good Demonstrations: Implicit Reasoning Quality Supervision via In-Context Reinforcement Learning

📅 2026-03-10
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses a critical limitation in existing reinforcement learning approaches for training large language models: the tendency to treat all correct answers equally, thereby inadvertently reinforcing low-quality reasoning trajectories that happen to yield correct outcomes. To overcome this, the authors propose a novel in-context reinforcement learning framework, In-Context RLVR, which introduces demonstration utility—the pedagogical value of a reasoning trajectory—as an intrinsic quality signal. Leveraging the model’s own in-context learning capability, the method employs a Bayesian-inspired evidence gain metric to implicitly reweight rewards without requiring external evaluators or additional computational overhead. Experimental results demonstrate that In-Context RLVR simultaneously improves both answer accuracy and reasoning quality on mathematical reasoning benchmarks, significantly outperforming standard RLVR baselines.

📝 Abstract
Reinforcement Learning with Verifiable Rewards (RLVR) improves reasoning in large language models but treats all correct solutions equally, potentially reinforcing flawed traces that reach correct answers by chance. We observe that better reasoning traces are better teachers: high-quality solutions serve as more effective demonstrations than low-quality ones. We term this teaching ability Demonstration Utility, and show that the policy model's own in-context learning ability provides an efficient way to measure it, yielding a quality signal termed Evidence Gain. To employ this signal during training, we introduce In-Context RLVR. Through a Bayesian analysis, we show that this objective implicitly reweights rewards by Evidence Gain, assigning higher weights to high-quality traces and lower weights to low-quality ones, without requiring costly computation or external evaluators. Experiments on mathematical benchmarks show improvements in both accuracy and reasoning quality over standard RLVR.
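The abstract describes the core mechanism: a trace's Demonstration Utility is measured by how much showing it in-context helps the model reach the correct answer (Evidence Gain), and that gain implicitly reweights the verifiable reward. A minimal sketch of this idea is below; the function names, the sigmoid weighting, and the `beta` temperature are illustrative assumptions, not the authors' implementation, and the log-probabilities would in practice come from the policy model itself.

```python
import math

def evidence_gain(logp_answer_with_demo: float,
                  logp_answer_without_demo: float) -> float:
    # Evidence Gain (illustrative): the log-likelihood improvement of the
    # correct answer when the candidate trace is used as an in-context
    # demonstration, versus a prompt with no demonstration.
    return logp_answer_with_demo - logp_answer_without_demo

def reweighted_reward(base_reward: float, gain: float,
                      beta: float = 1.0) -> float:
    # Hypothetical reweighting: squash the gain through a sigmoid so that
    # high-utility traces keep (most of) their verifiable reward while
    # low-utility traces are down-weighted rather than rewarded equally.
    weight = 1.0 / (1.0 + math.exp(-beta * gain))
    return base_reward * weight
```

Under this sketch, two traces that both earn the same binary verifiable reward receive different effective updates: the one whose presence in context raises the model's probability of the correct answer gets a larger weight.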
Problem

Research questions and friction points this paper is trying to address.

reasoning quality
demonstration utility
in-context learning
reinforcement learning
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

In-Context RLVR
Demonstration Utility
Evidence Gain
Implicit Reward Reweighting
Reasoning Quality
Tiehua Mei
School of Data Science, Fudan University
Minxuan Lv
University of Chinese Academy of Sciences
Leiyu Pan
Tianjin University
Natural Language Processing · Multilingual · Machine Translation
Zhenpeng Su
Chinese Academy of Sciences; Kuaishou
Mixture-of-Experts · Reinforcement Learning
Hongru Hou
School of Data Science, Fudan University
Hengrui Chen
School of Data Science, Fudan University
Ao Xu
School of Data Science, Fudan University
Deqing Yang
School of Data Science, Fudan University