UAlign: Leveraging Uncertainty Estimations for Factuality Alignment on Large Language Models

📅 2024-12-16
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Large language models (LLMs) frequently exhibit factual hallucinations or overconfidence because their knowledge boundaries are ill-defined. Method: We propose the *UAlign* framework, which models confidence scores and semantic entropy as explicit, interpretable representations of knowledge boundaries and injects these signals into prompts to guide output calibration. Furthermore, we train an uncertainty-aware reward model and apply Proximal Policy Optimization (PPO) so the model learns to abstain from answering questions beyond its knowledge. Contribution/Results: Our method improves both confident, accurate responses to known questions and reliable abstention on unknown ones. It outperforms prompt-engineering and supervised fine-tuning baselines across in-domain and out-of-domain benchmarks, enhancing factual consistency, reliability, and generalization.

📝 Abstract
Despite demonstrating impressive capabilities, Large Language Models (LLMs) still often struggle to accurately express the factual knowledge they possess, especially in cases where the LLMs' knowledge boundaries are ambiguous. To improve LLMs' factual expressions, we propose the UAlign framework, which leverages Uncertainty estimations to represent knowledge boundaries and then explicitly incorporates these representations as input features into prompts for LLMs to Align with factual knowledge. First, we prepare a dataset of knowledge question-answering (QA) samples by calculating two uncertainty estimations, the confidence score and the semantic entropy, to represent the knowledge boundaries of LLMs. Subsequently, using the prepared dataset, we train a reward model that incorporates uncertainty estimations and then employ the Proximal Policy Optimization (PPO) algorithm for factuality alignment on LLMs. Experimental results indicate that, by integrating uncertainty representations into LLM alignment, the proposed UAlign significantly enhances LLMs' capacity to confidently answer known questions and refuse unknown questions on both in-domain and out-of-domain tasks, showing reliability improvements and good generalizability over various prompt- and training-based baselines.
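The abstract's first step computes two uncertainty estimations per QA sample: a confidence score and a semantic entropy over sampled answers. Below is a minimal Python sketch of one plausible way to derive both from a set of sampled answers; the `are_equivalent` matcher, the sampling setup, and the exact formulas are illustrative assumptions rather than the paper's definitions (a real implementation would likely cluster paraphrases with a semantic matcher such as an NLI model).

```python
import math

def are_equivalent(a: str, b: str) -> bool:
    """Placeholder equivalence check: exact match after normalization.
    Assumed helper; a stronger semantic matcher would group paraphrases."""
    return a.strip().lower() == b.strip().lower()

def confidence_score(samples: list[str], greedy_answer: str) -> float:
    """Confidence as the fraction of sampled answers agreeing with the greedy answer."""
    agree = sum(1 for s in samples if are_equivalent(s, greedy_answer))
    return agree / len(samples)

def semantic_entropy(samples: list[str]) -> float:
    """Entropy over clusters of semantically equivalent sampled answers."""
    clusters: list[list[str]] = []
    for s in samples:
        for cluster in clusters:
            if are_equivalent(s, cluster[0]):
                cluster.append(s)
                break
        else:
            clusters.append([s])
    total = len(samples)
    return -sum((len(c) / total) * math.log(len(c) / total) for c in clusters)

samples = ["Paris", "paris", "Lyon", "Paris"]
print(confidence_score(samples, "Paris"))  # 0.75: three of four samples agree
print(semantic_entropy(samples))           # entropy over two clusters, ~0.56 nats
```

Low entropy and high confidence mark a question as "known"; high entropy and low confidence mark it as a candidate for refusal.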
Problem

Research questions and friction points this paper is trying to address.

Improving LLMs' factual expression accuracy
Aligning LLMs with factual knowledge using uncertainty
Enhancing LLMs' reliability in answering known and unknown questions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages uncertainty estimations for knowledge boundaries
Trains reward model with uncertainty and PPO (see the sketch after this list)
Enhances LLM factuality and refusal capabilities
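A hedged sketch of the alignment side described above: the precomputed uncertainty estimations are exposed as prompt features, and outputs are scored with a toy uncertainty-aware reward for PPO. The prompt template, refusal string, threshold, and reward values are illustrative assumptions; the paper trains a learned reward model rather than the hard-coded rule shown here.

```python
# Illustrative sketch (not the paper's exact design): uncertainty features
# are injected into the prompt, and a toy reward prefers correct answers on
# high-confidence questions and refusals on low-confidence ones.

REFUSAL = "I don't know."  # assumed refusal string

def build_prompt(question: str, confidence: float, entropy: float) -> str:
    """Expose the precomputed uncertainty estimations as explicit input features."""
    return (
        f"Confidence: {confidence:.2f}\n"
        f"Semantic entropy: {entropy:.2f}\n"
        f"Question: {question}\nAnswer:"
    )

def uncertainty_aware_reward(answer: str, gold: str, confidence: float,
                             threshold: float = 0.5) -> float:
    """Toy reward signal for PPO: reward correctness when the question is
    likely known (high confidence), reward refusal when it is likely unknown."""
    if confidence >= threshold:
        return 1.0 if answer.strip().lower() == gold.strip().lower() else -1.0
    return 1.0 if answer.strip() == REFUSAL else -1.0
```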
Boyang Xue
Ph.D. Candidate in The Chinese University of Hong Kong
Natural Language Processing · Large Language Models · Speech Recognition
Fei Mi
Huawei Noah's Ark Lab
LLM Post Training
Qi Zhu
Huawei Noah's Ark Lab
Hongru Wang
The Chinese University of Hong Kong
Rui Wang
The Chinese University of Hong Kong
Sheng Wang
The University of Hong Kong
Erxin Yu
The Hong Kong Polytechnic University
Xuming Hu
Assistant Professor, HKUST(GZ) / HKUST
Natural Language Processing · Large Language Model
Kam-Fai Wong
The Chinese University of Hong Kong