Can Large Language Models Express Uncertainty Like Human?

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) pose safety risks in high-stakes applications due to overconfidence, while existing confidence estimation methods suffer from reliance on inaccessible logits, prohibitive computational cost of multiple sampling, and misalignment with human communication norms via numeric outputs. Method: We propose *Linguistic Confidence*, a novel paradigm that models uncertainty through natural language hedging expressions (e.g., “possibly”, “roughly”). We curate the first large-scale human-annotated dataset of such linguistic uncertainty markers; design a lightweight mapping module to convert hedging terms into calibrated confidence scores; and introduce a joint framework combining prompt optimization and parameter-efficient fine-tuning. Contribution/Results: Experiments on question-answering tasks demonstrate substantial improvements in both reliability and discriminative power of uncertainty expression. Our approach achieves near-optimal confidence calibration without requiring logits or costly sampling—effectively bridging the gap between interpretable, human-aligned uncertainty communication and rigorous statistical calibration.

📝 Abstract
Large language models (LLMs) are increasingly used in high-stakes settings, where overconfident responses can mislead users. Reliable confidence estimation has been shown to enhance trust and task accuracy. Yet existing methods face practical barriers: logits are often hidden, multi-sampling is computationally expensive, and verbalized numerical uncertainty (e.g., giving a 0-100 score) deviates from natural communication. We revisit linguistic confidence (LC), where models express uncertainty through hedging language (e.g., probably, might), offering a lightweight and human-centered alternative. To advance this direction, we (1) release the first diverse, large-scale dataset of hedging expressions with human-annotated confidence scores, and (2) propose a lightweight mapper that converts hedges into confidence scores at near-zero cost. Building on these resources, we (3) conduct the first systematic study of LC across modern LLMs and QA benchmarks, revealing that while most LLMs underperform in expressing reliable LC, carefully designed prompting achieves competitive calibration and discriminability. Finally, we (4) introduce a fine-tuning framework that further improves LC reliability. Taken together, our work positions linguistic confidence as a scalable, efficient, and human-aligned approach to LLM uncertainty estimation, and calls for deeper exploration of this promising yet underexplored direction.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' ability to express uncertainty through natural language
Developing lightweight methods to convert hedging expressions into confidence scores
Improving reliability of linguistic confidence in LLMs through systematic study
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dataset of hedging expressions with human-annotated confidence scores
Lightweight mapper converts hedges into confidence scores
Fine-tuning framework improves linguistic confidence reliability
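The core of the lightweight mapper is a table from hedging phrases to human-annotated confidence scores. A minimal sketch of the idea is below; the paper's actual mapper is built from its human-annotated dataset, so the phrase list, score values, and the averaging rule here are illustrative assumptions, not the authors' implementation.

```python
import re

# Hypothetical hedge-to-confidence lookup. The real mapper would be
# populated from the paper's human-annotated dataset; these scores are
# placeholder values for illustration only.
HEDGE_SCORES = {
    "definitely": 0.95,
    "probably": 0.75,
    "likely": 0.70,
    "possibly": 0.45,
    "might": 0.40,
    "unlikely": 0.20,
}

def linguistic_confidence(answer: str, default: float = 0.5) -> float:
    """Map the hedging terms found in an answer to a confidence score.

    Averages the scores of all recognized hedges in the answer; if no
    hedge appears, falls back to a neutral default (an assumption made
    for this sketch).
    """
    tokens = re.findall(r"[a-z]+", answer.lower())
    scores = [HEDGE_SCORES[t] for t in tokens if t in HEDGE_SCORES]
    return sum(scores) / len(scores) if scores else default
```

Because the lookup runs in a single pass over the answer text, such a mapper adds near-zero cost on top of generation, which is the efficiency argument the abstract makes against logit-based and multi-sampling methods.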
👥 Authors
Linwei Tao — University of Sydney
(Confidence Calibration, Large Language Model, Trustworthy Machine Learning)
Yi-Fan Yeh — School of Computer Science, University of Sydney, Australia
Bo Kai — School of Computer Science, University of Sydney, Australia
Minjing Dong — Assistant Professor of Computer Science, City University of Hong Kong
(Computer Vision, Adversarial Robustness, Generative Model, Model Calibration, Efficient Model)
Tao Huang — Shanghai Jiao Tong University, Shanghai, China
Tom A. Lamb — Department of Engineering Science, University of Oxford, UK
Jialin Yu — Department of Engineering Science, University of Oxford, UK
Philip H. S. Torr — Department of Engineering Science, University of Oxford, UK
Chang Xu — School of Computer Science, University of Sydney, Australia