ConfTuner: Training Large Language Models to Express Their Confidence Verbally

📅 2025-08-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit "overconfidence": they express high confidence in answers that are incorrect, posing critical reliability risks in high-stakes domains such as science, law, and healthcare. To address this, we propose ConfTuner, a fine-tuning method built on a tokenized Brier score loss: a loss that requires no ground-truth confidence labels and is provably a proper scoring rule. ConfTuner calibrates text-based confidence outputs end to end via fine-tuning and is compatible with black-box LLMs (e.g., GPT-4o). Extensive experiments demonstrate that ConfTuner significantly improves confidence calibration across diverse reasoning tasks. Moreover, it delivers consistent gains in downstream applications, including self-correction and model cascading, without architectural modifications. By enabling reliable, interpretable confidence estimation, ConfTuner supports the deployment of trustworthy LLMs in safety-critical scenarios.

📝 Abstract
Large Language Models (LLMs) are increasingly deployed in high-stakes domains such as science, law, and healthcare, where accurate expressions of uncertainty are essential for reliability and trust. However, current LLMs are often observed to generate incorrect answers with high confidence, a phenomenon known as "overconfidence". Recent efforts have focused on calibrating LLMs' verbalized confidence: i.e., their expressions of confidence in text form, such as "I am 80% confident that...". Existing approaches either rely on prompt engineering or fine-tuning with heuristically generated uncertainty estimates, both of which have limited effectiveness and generalizability. Motivated by the notion of proper scoring rules for calibration in classical machine learning models, we introduce ConfTuner, a simple and efficient fine-tuning method that introduces minimal overhead and does not require ground-truth confidence scores or proxy confidence estimates. ConfTuner relies on a new loss function, tokenized Brier score, which we theoretically prove to be a proper scoring rule, intuitively meaning that it "correctly incentivizes the model to report its true probability of being correct". ConfTuner improves calibration across diverse reasoning tasks and generalizes to black-box models such as GPT-4o. Our results further show that better-calibrated confidence enables downstream gains in self-correction and model cascade, advancing the development of trustworthy LLM systems. The code is available at https://github.com/liushiliushi/ConfTuner.
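The abstract's parenthetical, that a proper scoring rule "correctly incentivizes the model to report its true probability of being correct", can be checked numerically for the Brier score: if the answer is correct with probability p, the expected Brier score of a reported confidence c is minimized exactly at c = p. A small illustrative sketch (function names are mine, not the paper's):

```python
def expected_brier(c, p):
    """E[(c - Y)^2] for Y ~ Bernoulli(p): the answer is correct
    (Y = 1) with probability p and wrong (Y = 0) otherwise."""
    return p * (c - 1.0) ** 2 + (1.0 - p) * c ** 2

# Searching a grid of reported confidences shows that c = p achieves
# the minimum expected Brier score, so honest reporting is optimal.
p = 0.7
grid = [c / 100 for c in range(101)]
best_c = min(grid, key=lambda c: expected_brier(c, p))
assert best_c == p
```

Expanding the expectation gives E[(c - Y)^2] = (c - p)^2 + p(1 - p), which makes the minimizer c = p visible algebraically as well.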
Problem

Research questions and friction points this paper is trying to address.

LLMs often express high confidence in incorrect answers ("overconfidence")
Existing calibration approaches (prompt engineering, fine-tuning on heuristic uncertainty estimates) have limited effectiveness and generalizability
High-stakes domains require trustworthy verbalized uncertainty
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning method with a tokenized Brier score loss
Provably a proper scoring rule; needs no ground-truth confidence labels
Calibration generalizes to black-box models such as GPT-4o
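A tokenized Brier score of the kind described above can be sketched in plain Python. This is a minimal illustration under assumed interfaces (discrete confidence bins "0%" through "100%", a per-example correctness label), not the paper's implementation:

```python
import math

def tokenized_brier_loss(conf_logits, correct):
    """Expected Brier score over verbalized confidence tokens.

    conf_logits: K logits, one per discrete confidence bin
                 ("0%", "10%", ..., "100%") at the position where
                 the model verbalizes its confidence.
    correct: 1.0 if the model's answer was correct, else 0.0.
    """
    K = len(conf_logits)
    values = [i / (K - 1) for i in range(K)]  # bin index -> confidence in [0, 1]
    m = max(conf_logits)
    exps = [math.exp(z - m) for z in conf_logits]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    # Expected squared error E_c[(c - correct)^2] under the token distribution.
    return sum(p * (v - correct) ** 2 for p, v in zip(probs, values))
```

Because the expectation is taken under the model's own token probabilities, minimizing this loss pushes probability mass toward the confidence bin matching the model's actual accuracy; in a real fine-tuning setup the logits would come from the LLM at the confidence-token position.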