🤖 AI Summary
Large language models (LLMs) pose safety risks in high-stakes applications due to overconfidence, while existing confidence estimation methods suffer from reliance on inaccessible logits, the prohibitive computational cost of multiple sampling, and numeric confidence outputs that misalign with human communication norms.
Method: We propose *Linguistic Confidence*, a novel paradigm that models uncertainty through natural language hedging expressions (e.g., “possibly”, “roughly”). We curate the first large-scale human-annotated dataset of such linguistic uncertainty markers; design a lightweight mapping module to convert hedging terms into calibrated confidence scores; and introduce a joint framework combining prompt optimization and parameter-efficient fine-tuning.
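The mapping module described above can be illustrated with a minimal sketch. The lexicon entries and scores here are hypothetical placeholders, not the paper's actual human-annotated dataset or learned mapper:

```python
# Hypothetical hedge-to-confidence lexicon (illustrative values only,
# not the paper's annotated dataset).
HEDGE_LEXICON = {
    "certainly": 0.95,
    "probably": 0.75,
    "likely": 0.70,
    "possibly": 0.45,
    "might": 0.40,
    "unlikely": 0.20,
}

def map_hedge_to_confidence(answer: str, default: float = 0.5) -> float:
    """Return a confidence score for the hedging language in `answer`.

    Scans the answer for known hedging terms; if several occur, the one
    implying the highest confidence wins. Falls back to `default` when
    no hedge is present.
    """
    tokens = (w.strip(".,;!?") for w in answer.lower().split())
    scores = [HEDGE_LEXICON[t] for t in tokens if t in HEDGE_LEXICON]
    return max(scores) if scores else default
```

A real mapper would be trained against the human annotations rather than hand-coded, but the interface, hedged text in, calibrated score out, is the same.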
Contribution/Results: Experiments on question-answering tasks demonstrate substantial improvements in both reliability and discriminative power of uncertainty expression. Our approach achieves near-optimal confidence calibration without requiring logits or costly sampling—effectively bridging the gap between interpretable, human-aligned uncertainty communication and rigorous statistical calibration.
📝 Abstract
Large language models (LLMs) are increasingly used in high-stakes settings, where overconfident responses can mislead users. Reliable confidence estimation has been shown to enhance trust and task accuracy. Yet existing methods face practical barriers: logits are often hidden, multi-sampling is computationally expensive, and verbalized numerical uncertainty (e.g., giving a 0–100 score) deviates from natural communication. We revisit linguistic confidence (LC), where models express uncertainty through hedging language (e.g., “probably”, “might”), offering a lightweight and human-centered alternative. To advance this direction, we (1) release the first diverse, large-scale dataset of hedging expressions with human-annotated confidence scores, and (2) propose a lightweight mapper that converts hedges into confidence scores at near-zero cost. Building on these resources, we (3) conduct the first systematic study of LC across modern LLMs and QA benchmarks, revealing that while most LLMs underperform in expressing reliable LC, carefully designed prompting achieves competitive calibration and discriminability. Finally, we (4) introduce a fine-tuning framework that further improves LC reliability. Taken together, our work positions linguistic confidence as a scalable, efficient, and human-aligned approach to LLM uncertainty estimation, and calls for deeper exploration of this promising yet underexplored direction.
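The abstract's claims of "competitive calibration" refer to how well stated confidence matches empirical accuracy. The standard way to quantify this is expected calibration error (ECE); the sketch below is a textbook implementation, not the paper's evaluation code:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: partition predictions into equal-width confidence
    bins, then take the dataset-weighted average of |accuracy - mean
    confidence| within each bin. Lower is better-calibrated.
    """
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # A confidence of exactly 0.0 is assigned to the first bin.
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0.0)]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)
        conf = sum(confidences[i] for i in idx) / len(idx)
        ece += (len(idx) / n) * abs(acc - conf)
    return ece
```

For example, a model that says "probably" (mapped to 0.75) and is right three times out of four is perfectly calibrated on that bin, yielding zero contribution to ECE; discriminability (e.g., AUROC between correct and incorrect answers) is measured separately.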