Direct Confidence Alignment: Aligning Verbalized Confidence with Internal Confidence in Large Language Models

📅 2025-12-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit systematic misalignment between their internal, token-probability-based confidence and their verbalized confidence statements, undermining calibration reliability and model transparency. To address this, we propose the first calibration paradigm that explicitly targets internal confidence as the alignment objective, departing from conventional accuracy-oriented approaches that rely on ground-truth labels. Our method employs Direct Preference Optimization (DPO) to align verbalized confidence with internal token probabilities, and we introduce three new calibration error metrics to quantify this alignment rigorously. We further establish a multi-model, multi-dataset evaluation framework to assess generalizability. Experiments across diverse open-weight LLMs show improved confidence alignment and reduced verbalization inconsistency on some architectures, but also reveal strong architectural dependencies: model-specific design choices critically influence alignment outcomes. This work establishes a model-adaptive pathway toward interpretable, confidence-aligned LLMs.

📝 Abstract
Producing trustworthy and reliable Large Language Models (LLMs) has become increasingly important as their usage becomes more widespread. Calibration seeks to achieve this by improving the alignment between the model's confidence and the actual likelihood of its responses being correct or desirable. However, it has been observed that the internal confidence of a model, derived from token probabilities, is not well aligned with its verbalized confidence, leading to misleading results with different calibration methods. In this paper, we propose Direct Confidence Alignment (DCA), a method using Direct Preference Optimization to align an LLM's verbalized confidence with its internal confidence rather than ground-truth accuracy, enhancing model transparency and reliability by ensuring closer alignment between the two confidence measures. We evaluate DCA across multiple open-weight LLMs on a wide range of datasets. To further assess this alignment, we also introduce three new calibration error-based metrics. Our results show that DCA improves alignment metrics on certain model architectures, reducing inconsistencies in a model's confidence expression. However, we also show that it can be ineffective on others, highlighting the need for more model-aware approaches in the pursuit of more interpretable and trustworthy LLMs.
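The misalignment the abstract describes can be illustrated with a minimal sketch: derive an internal confidence from the token log-probabilities of a generated answer, parse the confidence the model verbalized, and measure the gap between the two. The geometric-mean formulation and the single gap metric below are illustrative assumptions, not the paper's actual three calibration error metrics.

```python
# Hypothetical sketch: quantifying the gap between internal (token-probability)
# confidence and verbalized confidence for one response. The aggregation rule
# and gap metric are assumptions for illustration, not the paper's definitions.
import math

def internal_confidence(token_logprobs):
    """Internal confidence as the geometric mean of token probabilities,
    i.e. exp of the mean log-probability over the generated tokens."""
    return math.exp(sum(token_logprobs) / len(token_logprobs))

def confidence_gap(token_logprobs, verbalized_confidence):
    """Absolute gap in [0, 1] between internal and verbalized confidence."""
    return abs(internal_confidence(token_logprobs) - verbalized_confidence)

# Example: tokens generated with probability ~0.9 each, while the model
# verbalizes only 60% confidence -> internal 0.9, gap 0.3.
logprobs = [math.log(0.9)] * 5
gap = confidence_gap(logprobs, 0.60)
```

Averaging in log space keeps the internal score comparable across responses of different lengths, which is why a geometric mean is a natural (though here assumed) choice over a raw product of probabilities.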
Problem

Research questions and friction points this paper is trying to address.

Verbalized confidence in LLMs is poorly aligned with internal token-probability confidence
This misalignment undermines model transparency and reliability, and can mislead calibration methods
Inconsistencies in confidence expression vary across model architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Direct Confidence Alignment (DCA) aligns verbalized confidence with internal confidence rather than ground-truth accuracy
Uses Direct Preference Optimization (DPO) to perform the alignment
Introduces three new calibration error-based metrics for evaluating confidence alignment
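The DPO ingredient above needs preference pairs. One plausible construction, sketched below under stated assumptions (the pairing rule and data layout are hypothetical, not the paper's exact recipe), is to prefer, among candidate answers, the one whose verbalized confidence lies closest to the model's internal confidence.

```python
# Hypothetical sketch of building a DPO preference pair for confidence
# alignment: given candidate responses with parsed verbalized confidences,
# "chosen" is the candidate whose stated confidence best matches the model's
# internal confidence, "rejected" is the worst match. Illustrative only.

def build_preference_pair(candidates, internal_conf):
    """candidates: list of (response_text, verbalized_confidence) tuples.
    Returns (chosen, rejected) ranked by closeness to internal_conf."""
    ranked = sorted(candidates, key=lambda c: abs(c[1] - internal_conf))
    return ranked[0], ranked[-1]

chosen, rejected = build_preference_pair(
    [("Answer A. Confidence: 95%", 0.95),
     ("Answer A. Confidence: 70%", 0.70)],
    internal_conf=0.72,
)
# With internal confidence 0.72, the 70% verbalization is chosen
# and the 95% verbalization is rejected.
```

Pairs built this way can then be fed to a standard DPO trainer; note that no ground-truth answer labels are needed, which is the key departure from accuracy-targeted calibration.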