AI Summary
This work addresses the susceptibility of existing reinforcement learning methods to advantage bias under uncertain rewards, which often drives models toward either excessive conservatism or overconfidence, impairing reliable uncertainty quantification and exacerbating hallucination. To tackle this issue, the paper introduces UCPO, a novel framework that, for the first time, uncovers the root causes linking reward hacking to overconfidence. UCPO employs a Ternary Advantage Decoupling mechanism that separates deterministic and uncertain rollouts and normalizes their advantages independently. Additionally, it incorporates a Dynamic Uncertainty Reward Adjustment module that calibrates uncertainty reward weights in real time. Experimental results demonstrate that UCPO significantly enhances model reliability and calibration on both mathematical reasoning and general tasks, effectively mitigating reward imbalance and improving trustworthy behavior beyond the model's knowledge boundary.
Abstract
The key to building trustworthy Large Language Models (LLMs) lies in endowing them with inherent uncertainty-expression capabilities to mitigate the hallucinations that restrict their high-stakes applications. However, existing RL paradigms such as GRPO often suffer from Advantage Bias due to binary decision spaces and static uncertainty rewards, inducing either excessive conservatism or overconfidence. To tackle this challenge, this paper unveils the root causes of reward hacking and overconfidence in current RL paradigms that incorporate uncertainty-based rewards, based on which we propose the UnCertainty-Aware Policy Optimization (UCPO) framework. UCPO employs Ternary Advantage Decoupling to separate and independently normalize deterministic and uncertain rollouts, thereby eliminating advantage bias. Furthermore, a Dynamic Uncertainty Reward Adjustment mechanism is introduced to calibrate uncertainty weights in real time according to model evolution and instance difficulty. Experimental results on mathematical reasoning and general tasks demonstrate that UCPO effectively resolves the reward imbalance, significantly improving the reliability and calibration of models beyond their knowledge boundaries.
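The core idea of decoupled advantage normalization can be made concrete with a minimal sketch. The following is an illustrative assumption, not the paper's released code: it treats a rollout group as a list of scalar rewards plus a flag marking uncertain (abstaining) rollouts, and applies GRPO-style mean/std normalization within the deterministic and uncertain subgroups separately, so neither subgroup's reward scale biases the other's advantages. The function name `decoupled_advantages` and the handling of degenerate subgroups are hypothetical choices.

```python
import numpy as np

def decoupled_advantages(rewards, is_uncertain, eps=1e-8):
    """Illustrative sketch of ternary advantage decoupling:
    normalize rewards independently within the deterministic
    and uncertain rollout subgroups (assumed formulation)."""
    rewards = np.asarray(rewards, dtype=float)
    is_uncertain = np.asarray(is_uncertain, dtype=bool)
    adv = np.zeros_like(rewards)
    for mask in (is_uncertain, ~is_uncertain):
        if mask.sum() > 1:
            group = rewards[mask]
            # GRPO-style whitening, but restricted to one subgroup
            adv[mask] = (group - group.mean()) / (group.std() + eps)
        # subgroups with 0 or 1 rollouts contribute zero advantage
    return adv
```

Under this sketch, a batch of identical uncertain rollouts receives zero advantage regardless of how the deterministic rollouts scored, which is exactly the bias the decoupling is meant to remove.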