Decoupling Reasoning and Confidence: Resurrecting Calibration in Reinforcement Learning from Verifiable Rewards

📅 2026-03-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the issue of overconfidence in large language models trained via reinforcement learning with verifiable rewards (RLVR), where models often exhibit degraded calibration and assign high confidence to incorrect answers. The study is the first to reveal a gradient conflict between optimizing policy accuracy and minimizing calibration error. To resolve this, the authors propose the Decoupled Confidence and Policy Optimization (DCPO) framework, which disentangles confidence estimation from reasoning objectives and jointly optimizes both goals. Experiments demonstrate that DCPO achieves calibration performance significantly superior to existing methods while maintaining reasoning accuracy comparable to GRPO, thereby effectively mitigating overconfidence without compromising task performance.
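The page does not reproduce the DCPO training procedure, so the following is a minimal, hypothetical sketch of what decoupling confidence estimation from the reasoning objective could look like: a separate confidence head is trained with a Brier-style loss on detached hidden states, so the calibration gradient never flows back into the policy, which is optimized with a GRPO/REINFORCE-style surrogate. All names (ConfidenceHead, policy_loss, calibration_loss) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConfidenceHead(nn.Module):
    """Small head mapping (detached) hidden states to a scalar confidence in [0, 1]."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, 1)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Detaching decouples confidence estimation from the reasoning policy:
        # the calibration gradient updates only this head, never the backbone.
        return torch.sigmoid(self.proj(hidden.detach())).squeeze(-1)

def policy_loss(answer_logprob: torch.Tensor, advantage: torch.Tensor) -> torch.Tensor:
    # GRPO/REINFORCE-style surrogate: raise log-probability of high-advantage answers.
    return -(advantage.detach() * answer_logprob).mean()

def calibration_loss(confidence: torch.Tensor, correctness: torch.Tensor) -> torch.Tensor:
    # Brier-style loss: reported confidence should match the verifiable 0/1 outcome.
    return ((confidence - correctness) ** 2).mean()

# Toy usage with stand-ins for LLM hidden states and answer log-probabilities.
hidden_dim = 16
head = ConfidenceHead(hidden_dim)
hidden = torch.randn(4, hidden_dim, requires_grad=True)
answer_logprob = torch.randn(4, requires_grad=True)
advantage = torch.tensor([1.0, -1.0, 0.5, -0.5])
correctness = torch.tensor([1.0, 0.0, 1.0, 0.0])

# The two objectives are optimized jointly but update disjoint parameters.
loss = policy_loss(answer_logprob, advantage) + calibration_loss(head(hidden), correctness)
loss.backward()
```

Under this sketch, over-confidence is handled by the confidence head alone, so driving calibration error down cannot fight the policy gradient that drives reasoning accuracy.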

📝 Abstract
Reinforcement Learning from Verifiable Rewards (RLVR) significantly enhances the reasoning ability of large language models (LLMs) but suffers severely from calibration degeneration, where models become excessively over-confident in incorrect answers. Previous studies attempt to directly incorporate a calibration objective into the existing optimization target. However, our theoretical analysis demonstrates that there is a fundamental gradient conflict between maximizing policy accuracy and minimizing calibration error. Building on this insight, we propose DCPO, a simple yet effective framework that systematically decouples the reasoning and calibration objectives. Extensive experiments demonstrate that DCPO not only preserves accuracy on par with GRPO but also achieves the best calibration performance and substantially mitigates the over-confidence issue. Our study provides valuable insights and a practical solution for more reliable LLM deployment.
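As background for the terms above: a standard way to quantify calibration error is the Expected Calibration Error (ECE), and the gradient conflict can be illustrated with a one-parameter toy example. This is an illustrative sketch, not the paper's actual theoretical analysis, and the paper's exact metric is not given on this page.

\[
\mathrm{ECE} = \sum_{m=1}^{M} \frac{|B_m|}{N}\,\bigl|\operatorname{acc}(B_m) - \operatorname{conf}(B_m)\bigr|,
\]

where predictions are grouped into $M$ confidence bins $B_m$ out of $N$ total predictions. Now suppose a single scalar $p$ serves both as the probability the model assigns to a verified-correct answer and as its reported confidence, and let $a \in (0,1)$ be the empirical accuracy on that prompt. An accuracy-maximizing term such as $\mathcal{L}_{\text{acc}} = -\log p$ has gradient $-1/p < 0$ and always pushes $p$ upward, while a Brier-style calibration term $\mathcal{L}_{\text{cal}} = (p - a)^2$ has gradient $2(p - a)$ and pulls $p$ back toward $a$; as soon as $p > a$ the two gradients point in opposite directions, which is the kind of conflict that motivates decoupling the objectives.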
Problem

Research questions and friction points this paper is trying to address.

calibration degeneration
over-confidence
Reinforcement Learning from Verifiable Rewards
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decoupling
Calibration
Reinforcement Learning from Verifiable Rewards
Gradient Conflict
Over-confidence Mitigation
🔎 Similar Papers
No similar papers found.
Zhengzhao Ma
Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Xueru Wen
School of Computer Science and Technology, University of Chinese Academy of Sciences
Natural Language Processing · Alignment · Large Language Model
Boxi Cao
Institute of Software, Chinese Academy of Sciences
Natural Language Processing
Yaojie Lu
Institute of Software, Chinese Academy of Sciences
Information Extraction · Large Language Models
Hongyu Lin
Institute of Software, Chinese Academy of Sciences
Natural Language Processing · Information Extraction and Machine Learning
Jinglin Yang
Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100085, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100085, China
Min He
National Computer Network Emergency Response Technical Team/Coordination Center of China, Beijing 100029, China
Xianpei Han
Chinese Information Processing Laboratory, Institute of Software, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
Le Sun
Institute of Software, CAS
Information Retrieval · Natural Language Processing