UCPO: Uncertainty-Aware Policy Optimization

📅 2026-01-30
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the susceptibility of existing reinforcement learning methods to advantage bias under uncertain rewards, which often drives models to become either overly conservative or overconfident, impairing reliable uncertainty quantification and exacerbating hallucination. To tackle this issue, the paper introduces UCPO, a framework built on an analysis of the root causes linking reward hacking to overconfidence. UCPO employs a Ternary Advantage Decoupling mechanism that separates deterministic and uncertain trajectories and normalizes them independently, and it incorporates a Dynamic Uncertainty Reward Adjustment module to calibrate reward weights in real time. Experimental results show that UCPO significantly improves model reliability and calibration on both mathematical reasoning and general tasks, mitigating reward imbalance and encouraging trustworthy behavior beyond the model's knowledge boundary.
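The decoupling idea in the summary above can be sketched as group-wise advantage normalization: rollouts that commit to an answer and rollouts that express uncertainty are standardized separately, so neither group's reward scale biases the other's advantages. This is a minimal illustration, not the authors' implementation; the function name, the binary deterministic/uncertain split, and the exact normalization are assumptions.

```python
from statistics import mean, pstdev

def decoupled_advantages(rewards, is_uncertain, eps=1e-8):
    """Normalize advantages separately for uncertain and deterministic
    rollouts, rather than z-scoring the whole group at once (hypothetical
    sketch of the decoupling idea)."""
    adv = [0.0] * len(rewards)
    for flag in (True, False):
        idx = [i for i, u in enumerate(is_uncertain) if u == flag]
        if not idx:
            continue
        group = [rewards[i] for i in idx]
        m, s = mean(group), pstdev(group)
        for i in idx:
            adv[i] = (rewards[i] - m) / (s + eps)
    return adv

# Five rollouts: three commit to an answer, two abstain.
rewards = [1.0, 0.0, 1.0, 0.5, 0.5]
uncertain = [False, False, False, True, True]
adv = decoupled_advantages(rewards, uncertain)
# The abstaining rollouts are normalized only against each other,
# so their identical rewards yield zero advantage here.
print(adv)
```

With pooled normalization, the abstention reward of 0.5 would receive a nonzero (and reward-scale-dependent) advantage relative to the correct/incorrect rollouts; decoupling removes that cross-group bias.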

πŸ“ Abstract
The key to building trustworthy Large Language Models (LLMs) lies in endowing them with inherent uncertainty expression capabilities to mitigate the hallucinations that restrict their high-stakes applications. However, existing reinforcement learning (RL) paradigms such as GRPO often suffer from advantage bias due to binary decision spaces and static uncertainty rewards, inducing either excessive conservatism or overconfidence. To tackle this challenge, this paper unveils the root causes of reward hacking and overconfidence in current RL paradigms that incorporate uncertainty-based rewards, based on which we propose the UnCertainty-Aware Policy Optimization (UCPO) framework. UCPO employs Ternary Advantage Decoupling to separate and independently normalize deterministic and uncertain rollouts, thereby eliminating advantage bias. Furthermore, a Dynamic Uncertainty Reward Adjustment mechanism is introduced to calibrate uncertainty weights in real time according to model evolution and instance difficulty. Experimental results on mathematical reasoning and general tasks demonstrate that UCPO effectively resolves the reward imbalance, significantly improving the reliability and calibration of models beyond their knowledge boundaries.
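The abstract's second mechanism, calibrating uncertainty weights by instance difficulty as the model evolves, can be sketched with a simple schedule: track per-instance rollout accuracy with an exponential moving average and scale the abstention reward accordingly, so abstaining pays more on hard instances and less on easy ones. All constants, names, and the linear schedule below are illustrative assumptions, not the paper's formulation.

```python
def uncertainty_reward_weight(accuracy_ema, base=0.5, lo=0.1, hi=0.9):
    """Hypothetical dynamic weight for the abstention ("I don't know")
    reward: lower tracked accuracy (harder instance) raises the weight,
    higher accuracy lowers it, clipped to [lo, hi]."""
    w = base + (0.5 - accuracy_ema)  # harder instance -> larger weight
    return min(hi, max(lo, w))

def rollout_reward(correct, abstained, accuracy_ema):
    """Reward for one rollout: full credit for a correct answer, zero
    for a wrong one, and a difficulty-dependent partial credit for
    abstaining (sketch only)."""
    if abstained:
        return uncertainty_reward_weight(accuracy_ema)
    return 1.0 if correct else 0.0

# On an instance the model usually fails (EMA accuracy 0.1), abstaining
# is rewarded nearly as much as answering correctly; on an easy instance
# (EMA accuracy 0.9), abstention pays little.
print(rollout_reward(False, True, 0.1))
print(rollout_reward(False, True, 0.9))
```

A schedule like this discourages the static-reward failure mode the abstract describes: with a fixed abstention reward, the policy can hack the reward by abstaining everywhere or by never abstaining, depending on where the constant sits.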
Problem

Research questions and friction points this paper is trying to address.

Uncertainty
Reward Hacking
Overconfidence
Advantage Bias
Large Language Models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uncertainty-Aware Policy Optimization
Ternary Advantage Decoupling
Dynamic Uncertainty Reward Adjustment
Advantage Bias
Reward Calibration
Authors

Xianzhou Zeng (Ant Group)
Jing Huang (Zhejiang University)
Chunmei Xie (Ant Group)
Gongrui Nan (Ant Group)
Siye Chen (Ant Group)
Mengyu Lu (Ant Group)
Weiqi Xiong (Ant Group)
Qixuan Zhou (Ant Group)
Junhao Zhang (National University of Singapore; Shandong University)
Qiang Zhu (Zhejiang University)
Yadong Li (Ant Group)
Xingzhong Xu (Ant Group)