Towards Harmonized Uncertainty Estimation for Large Language Models

📅 2025-05-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing large language models (LLMs) suffer from a fundamental trade-off among weak informativeness, poor balance, and inadequate calibration in uncertainty estimation—hindering their trustworthy deployment. To address this, we propose CUE, a lightweight correction framework that enables the first cross-model and cross-task adaptive rescaling of uncertainty scores. CUE trains a small, task-agnostic correction model on performance-aligned data derived from the target LLM’s outputs, and performs post-hoc calibration by jointly leveraging internal reasoning traces and generative features. Extensive evaluation across diverse LLMs (e.g., Llama, Qwen, Phi-3) and tasks—including fact verification, logical reasoning, and open-ended generation—demonstrates that CUE substantially improves uncertainty estimation quality: key metrics surpass state-of-the-art methods by up to 60%. Moreover, CUE consistently enhances predictive reliability and decision robustness without modifying the base LLM or requiring retraining.

📝 Abstract
To facilitate robust and trustworthy deployment of large language models (LLMs), it is essential to quantify the reliability of their generations through uncertainty estimation. While recent efforts have made significant advances by leveraging the internal logic and linguistic features of LLMs to estimate uncertainty scores, our empirical analysis highlights the failure of these methods to harmonize indication, balance, and calibration, which hinders their broader applicability for accurate uncertainty estimation. To address this challenge, we propose CUE (Corrector for Uncertainty Estimation): a straightforward yet effective method that employs a lightweight model, trained on data aligned with the target LLM's performance, to adjust uncertainty scores. Comprehensive experiments across diverse models and tasks demonstrate its effectiveness, achieving consistent improvements of up to 60% over existing methods.
Problem

Research questions and friction points this paper is trying to address.

Quantify reliability of LLM generations via uncertainty estimation
Address pitfalls in harmonizing indication, balance, and calibration
Improve accuracy of uncertainty scores across diverse models/tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight model adjusts uncertainty scores
Data aligned with target LLM performance
Improves over existing methods by up to 60%
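The corrector idea above — learn a mapping from a base method's raw uncertainty scores to values aligned with the target LLM's observed performance, then apply it post hoc — can be illustrated with a generic calibration sketch. This is not the authors' CUE implementation (which trains a lightweight model on reasoning traces and generative features); it is a minimal stand-in using isotonic regression (pool-adjacent-violators) fit on hypothetical (score, was-the-answer-wrong) pairs:

```python
# Hypothetical sketch of post-hoc uncertainty-score correction, NOT the
# paper's CUE model. We fit a monotone map from raw uncertainty scores to
# empirical error rates on "performance-aligned" data: pairs of
# (raw score, 1 if the LLM's answer was wrong else 0).

def fit_isotonic(scores, errors):
    """Pool-adjacent-violators: fit a non-decreasing step function
    from raw scores to error rates. Returns (sorted_scores, fitted)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    xs = [scores[i] for i in order]
    ys = [float(errors[i]) for i in order]
    # Maintain blocks of [weighted mean, weight]; merge any adjacent
    # blocks that violate monotonicity.
    blocks = []
    for y in ys:
        blocks.append([y, 1.0])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            w = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / w, w])
    # Expand blocks back to one fitted value per data point.
    fitted = []
    for m, w in blocks:
        fitted.extend([m] * int(w))
    return xs, fitted

def calibrate(score, xs, fitted):
    """Look up the corrected error probability for a new raw score."""
    out = fitted[0]
    for x, f in zip(xs, fitted):
        if score >= x:
            out = f
    return out
```

Because the correction is fit only on the base model's outputs and their correctness, the base LLM itself is never modified or retrained, mirroring the post-hoc property claimed for CUE.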
Rui Li
Peking University
Jing Long
Peking University
Muge Qi
Peking University
Heming Xia
Natural Language Processing Group, The Hong Kong Polytechnic University
Natural Language Processing, Large Language Models
Lei Sha
Prof@Beihang University, Prof@ZGC Lab, Oxtium AI, University of Oxford
NLP, ML
Peiyi Wang
Peking University
Zhifang Sui
Peking University