Miner: Mining Intrinsic Mastery for Data-Efficient RL in Large Reasoning Models

📅 2026-01-08
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the inefficiency of existing critic-free reinforcement learning methods on positive homogeneous prompts, where all rollouts are correct, so advantage estimates collapse to zero and training stalls. To overcome this, the authors propose a critic-free framework that leverages the policy model's intrinsic uncertainty as a self-supervised reward signal, eliminating the need for external supervision or additional inference overhead. The approach introduces two key innovations: a token-level focal credit assignment mechanism that dynamically amplifies gradients on critical uncertain tokens while suppressing overconfident ones, and an adaptive advantage calibration technique that seamlessly integrates intrinsic rewards with verifiable external rewards. Evaluated on Qwen3-4B and Qwen3-8B, the method achieves state-of-the-art performance across six reasoning benchmarks, improving Pass@1 by up to 4.58 points and Pass@K by up to 6.66 points over GRPO.

πŸ“ Abstract
Current critic-free RL methods for large reasoning models suffer from severe inefficiency when training on positive homogeneous prompts (where all rollouts are correct), wasting rollouts because advantage estimates become zero. We introduce a radically simple yet powerful solution to Mine intrinsic mastery (Miner) that repurposes the policy's intrinsic uncertainty as a self-supervised reward signal, with no external supervision, auxiliary models, or additional inference cost. Our method pioneers two key innovations: (1) a token-level focal credit assignment mechanism that dynamically amplifies gradients on critical uncertain tokens while suppressing overconfident ones, and (2) adaptive advantage calibration to seamlessly integrate intrinsic and verifiable rewards. Evaluated across six reasoning benchmarks on the Qwen3-4B and Qwen3-8B base models, Miner achieves the best performance among the five compared algorithms, yielding absolute gains of up to 4.58 points in Pass@1 and 6.66 points in Pass@K over GRPO. Comparison with other methods targeting exploration enhancement further demonstrates the benefit of the two proposed innovations. This suggests that exploiting latent uncertainty is both necessary and sufficient for efficient and scalable RL training of reasoning models.
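The core idea described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the entropy-based uncertainty measure, the power-law focal weighting, the mean-centered advantage baseline, and the mixing coefficient `alpha` are all assumptions standing in for the paper's actual formulas.

```python
import numpy as np

def token_entropy(probs, eps=1e-12):
    """Per-token predictive entropy, used here as a proxy for the
    policy's intrinsic uncertainty (an assumption, not the paper's exact measure)."""
    return -np.sum(probs * np.log(probs + eps), axis=-1)

def focal_credit(entropy, gamma=2.0):
    """Focal-style token weights: amplify gradients on high-uncertainty
    tokens and suppress overconfident (low-entropy) ones."""
    h = entropy / entropy.max()  # normalize uncertainty to [0, 1]
    return h ** gamma            # gamma > 1 sharpens the focus on uncertain tokens

def calibrated_advantage(intrinsic, verifiable, alpha=0.5):
    """Blend a GRPO-style group-centered verifiable advantage with an
    intrinsic-uncertainty advantage. When all rollouts are correct, the
    verifiable term is zero everywhere, but the intrinsic term still
    provides a non-degenerate learning signal."""
    a_ext = verifiable - verifiable.mean()  # zero on positive homogeneous prompts
    a_int = intrinsic - intrinsic.mean()
    return a_ext + alpha * a_int

# Toy group of 4 rollouts, all verified correct (positive homogeneous prompt):
verifiable = np.ones(4)
intrinsic = np.array([0.2, 0.5, 0.1, 0.9])  # e.g. mean token entropy per rollout
adv = calibrated_advantage(intrinsic, verifiable)
# a_ext is all zeros, yet adv is non-zero, so training does not stall
```

The point of the toy example is the failure mode from the abstract: with all rollouts correct, a purely verifiable group-relative advantage vanishes, while the intrinsic term keeps gradients flowing.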
Problem

Research questions and friction points this paper is trying to address.

data-efficient RL
large reasoning models
positive homogeneous prompts
advantage estimation
intrinsic uncertainty
Innovation

Methods, ideas, or system contributions that make the work stand out.

intrinsic uncertainty
token-level credit assignment
adaptive advantage calibration
data-efficient RL
self-supervised reward