🤖 AI Summary
This work addresses a critical limitation of existing verifiable-reward reinforcement learning methods: they neglect the model's intrinsic uncertainty and consequently treat high- and low-uncertainty solutions equivalently, impeding effective optimization of reasoning paths. To resolve this, we propose the EGPO framework, which introduces, for the first time, a metacognitive entropy calibration mechanism. Without modifying the verifier or the reward definition, EGPO constructs a zero-overhead entropy proxy from token-level likelihoods and employs asymmetric calibration to distinguish correct reasoning from overconfident errors, enabling uncertainty-aware and stable policy optimization. Experimental results demonstrate that our approach consistently and significantly improves the performance of large reasoning models across multiple benchmarks, effectively mitigating the mismatch between uncertainty estimates and reward signals.
📝 Abstract
Large reasoning models (LRMs) have emerged as a powerful paradigm for solving complex real-world tasks. In practice, these models are predominantly trained via Reinforcement Learning with Verifiable Rewards (RLVR), yet most existing outcome-only RLVR pipelines rely almost exclusively on a binary correctness signal and largely ignore the model's intrinsic uncertainty. We term this discrepancy the uncertainty-reward mismatch: high- and low-uncertainty solutions are treated equivalently, preventing the policy from "knowing what it knows" and impeding the shift from optimizing for correct answers to optimizing effective reasoning paths. This limitation is especially critical in reasoning-centric tasks such as mathematics and question answering, where performance hinges on the quality of the model's internal reasoning process rather than mere memorization of final answers. To address this, we propose EGPO, a metacognitive entropy calibration framework that explicitly integrates intrinsic uncertainty into RLVR to enhance LRMs. EGPO estimates per-sample uncertainty using a zero-overhead entropy proxy derived from token-level likelihoods and aligns it with extrinsic correctness through an asymmetric calibration mechanism that preserves correct reasoning while selectively regulating overconfident failures, thereby enabling stable and uncertainty-aware policy optimization. Moreover, EGPO recovers informative learning signals from otherwise degenerate group-based rollouts without modifying the verifier or reward definition. Extensive experiments across multiple benchmarks demonstrate that EGPO yields substantial and consistent improvements in reasoning performance, establishing a principled path for advancing LRMs through metacognitive entropy calibration.
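The mechanism described in the abstract (a likelihood-based entropy proxy plus asymmetric calibration of group-based advantages) can be sketched as follows. The exact proxy, calibration rule, and hyperparameters (`alpha`, the entropy normalization) are illustrative assumptions, not the paper's definitions:

```python
import numpy as np

def entropy_proxy(token_logprobs):
    # Zero-overhead uncertainty proxy: mean negative token log-likelihood
    # of the sampled sequence (one common choice; EGPO's exact proxy
    # may differ).
    return -float(np.mean(token_logprobs))

def calibrated_advantages(rewards, entropies, alpha=0.5, eps=1e-8):
    """Hypothetical asymmetric calibration over a GRPO-style group.

    rewards   : binary correctness signals for each rollout in the group
    entropies : per-rollout entropy proxies
    """
    r = np.asarray(rewards, dtype=float)
    h = np.asarray(entropies, dtype=float)
    # Standard group-normalized advantage; degenerates to all zeros when
    # every rollout in the group has the same reward.
    adv = (r - r.mean()) / (r.std() + eps)
    # Asymmetric rule: correct rollouts keep their advantage untouched;
    # incorrect rollouts are penalized more the lower their entropy,
    # i.e. overconfident failures are selectively regulated.
    overconfidence = np.clip(1.0 - h / (h.mean() + eps), 0.0, 1.0)
    return np.where(r > 0, adv, adv - alpha * overconfidence)
```

Note how this sketch also recovers a signal from a degenerate all-incorrect group: the normalized advantages are all zero, but the overconfident (low-entropy) failure still receives a negative advantage while high-entropy failures are left alone.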