Influential Training Data Retrieval for Explaining Verbalized Confidence of LLMs

📅 2026-01-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the frequent misalignment between the stated confidence of large language models (LLMs) and their factual accuracy, which undermines their reliability. To this end, the authors propose TracVC, a novel method that leverages training data provenance to explain the origins of LLM confidence statements. By integrating information retrieval with influence estimation, TracVC traces the specific training samples underlying a model’s confidence assertions and introduces a new metric—content groundness—to evaluate whether such confidence is rooted in relevant factual content. Experiments reveal that the confidence of OLMo2-13B is often driven by irrelevant training data, indicating a tendency to mimic superficial linguistic patterns rather than rely on substantive knowledge. This finding exposes a fundamental limitation in current LLM training paradigms.

📝 Abstract
Large language models (LLMs) can increase users' perceived trust by verbalizing confidence in their outputs. However, prior work has shown that LLMs are often overconfident, making their stated confidence unreliable since it does not consistently align with factual accuracy. To better understand the sources of this verbalized confidence, we introduce TracVC (Tracing Verbalized Confidence), a method that builds on information retrieval and influence estimation to trace generated confidence expressions back to the training data. We evaluate TracVC on OLMo and Llama models in a question answering setting, proposing a new metric, content groundness, which measures the extent to which an LLM grounds its confidence in content-related training examples (relevant to the question and answer) versus in generic examples of confidence verbalization. Our analysis reveals that OLMo2-13B is frequently influenced by confidence-related data that is lexically unrelated to the query, suggesting that it may mimic superficial linguistic expressions of certainty rather than rely on genuine content grounding. These findings point to a fundamental limitation in current training regimes: LLMs may learn how to sound confident without learning when confidence is justified. Our analysis provides a foundation for improving LLMs' trustworthiness in expressing more reliable confidence.
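The abstract describes content groundness as the extent to which the most influential training examples are content-related (lexically relevant to the question and answer) rather than generic confidence verbalizations. A minimal sketch of one way such a metric could be computed is below; the influence scores, the `content_groundness` function, and the lexical-overlap test are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch: fraction of the top-k most influential training examples
# that lexically overlap the question/answer terms. Influence scores here
# are placeholders; the paper derives them via retrieval + influence
# estimation, which is not reproduced.

def content_groundness(influences, query_terms, k=5):
    """influences: list of (training_text, influence_score) pairs.
    Returns the fraction of the top-k influential examples that share
    at least one (lowercased) token with the query terms."""
    top = sorted(influences, key=lambda x: x[1], reverse=True)[:k]
    qt = {t.lower() for t in query_terms}
    grounded = sum(
        1 for text, _ in top
        if qt & {w.lower() for w in text.split()}
    )
    return grounded / len(top)

# Toy example: two content-related examples vs three generic
# confidence-verbalization examples among the top-5 influences.
examples = [
    ("The Eiffel Tower is in Paris, completed in 1889.", 0.92),
    ("I am 90% confident in my answer.", 0.88),
    ("Paris is the capital of France.", 0.75),
    ("I'm certain about this.", 0.60),
    ("Confidence: high.", 0.55),
]
print(content_groundness(examples, ["Eiffel", "Tower", "Paris"]))  # 0.4
```

A low score on this toy metric would mirror the paper's finding for OLMo2-13B: confidence driven mostly by generic expressions of certainty rather than query-relevant content.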
Problem

Research questions and friction points this paper is trying to address.

verbalized confidence
overconfidence
training data influence
content grounding
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

TracVC
verbalized confidence
influential training data
content groundness
large language models
Yuxi Xia
Faculty of Computer Science, University of Vienna, Vienna, Austria
Loris Schoenegger
Faculty of Computer Science, University of Vienna, Vienna, Austria
Benjamin Roth
University of Vienna
Natural Language Processing · Machine Learning