🤖 AI Summary
This work addresses the frequent misalignment between the stated confidence of large language models (LLMs) and their factual accuracy, which undermines their reliability. To this end, the authors propose TracVC, a novel method that leverages training data provenance to explain the origins of LLM confidence statements. By integrating information retrieval with influence estimation, TracVC traces the specific training samples underlying a model’s confidence assertions and introduces a new metric—content groundedness—to evaluate whether such confidence is rooted in relevant factual content. Experiments reveal that the confidence of OLMo2-13B is often driven by irrelevant training data, indicating a tendency to mimic superficial linguistic patterns rather than rely on substantive knowledge. This finding exposes a fundamental limitation in current LLM training paradigms.
📝 Abstract
Large language models (LLMs) can increase users' perceived trust by verbalizing confidence in their outputs. However, prior work has shown that LLMs are often overconfident, making their stated confidence unreliable since it does not consistently align with factual accuracy. To better understand the sources of this verbalized confidence, we introduce TracVC (**Trac**ing **V**erbalized **C**onfidence), a method that builds on information retrieval and influence estimation to trace generated confidence expressions back to the training data. We evaluate TracVC on OLMo and Llama models in a question answering setting, proposing a new metric, content groundedness, which measures the extent to which an LLM grounds its confidence in content-related training examples (relevant to the question and answer) versus in generic examples of confidence verbalization. Our analysis reveals that OLMo2-13B is frequently influenced by confidence-related data that is lexically unrelated to the query, suggesting that it may mimic superficial linguistic expressions of certainty rather than rely on genuine content grounding. These findings point to a fundamental limitation in current training regimes: LLMs may learn how to sound confident without learning when confidence is justified. Our analysis provides a foundation for improving LLMs' trustworthiness by enabling more reliable expressions of confidence.
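As a rough illustration (not the paper's actual implementation), the content-groundedness idea can be sketched as a fraction over influential training examples: of the examples that most influence a confidence statement, how many relate to the question/answer content rather than to generic confidence phrasing? The function names, the token-overlap heuristic, and the threshold below are all hypothetical stand-ins for TracVC's retrieval and influence-estimation machinery:

```python
# Hypothetical sketch of "content groundedness": among training examples
# assumed to be the most influential for a model's confidence statement,
# measure the fraction that is lexically related to the question/answer.
# The Jaccard token-overlap heuristic and threshold are illustrative only.

def token_overlap(a: str, b: str) -> float:
    """Jaccard similarity over whitespace tokens (a crude relatedness proxy)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def content_groundedness(question: str, answer: str,
                         influential_examples: list[str],
                         overlap_threshold: float = 0.1) -> float:
    """Fraction of influential examples that overlap with the Q/A content
    (versus generic confidence-verbalization examples)."""
    if not influential_examples:
        return 0.0
    target = f"{question} {answer}"
    related = [ex for ex in influential_examples
               if token_overlap(target, ex) >= overlap_threshold]
    return len(related) / len(influential_examples)

# Toy usage: one content-related example, one generic confidence phrase.
examples = [
    "The capital of France is Paris.",   # content-related
    "I'm 90% confident in my answer.",   # generic confidence phrasing
]
score = content_groundedness("What is the capital of France?", "Paris", examples)
print(score)  # 0.5
```

Under this toy heuristic, a low score would correspond to the paper's finding for OLMo2-13B: confidence driven mostly by examples unrelated to the query content.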