Semantic Volume: Quantifying and Detecting both External and Internal Uncertainty in LLMs

📅 2025-02-28
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Large language models (LLMs) hallucinate for two broad reasons: internal uncertainty, arising from knowledge gaps or contradictions within the model, and external uncertainty, arising from ambiguous user queries that admit multiple interpretations. Existing methods model these two sources separately. Method: Semantic Volume is a general, unsupervised measure that treats the LLM as a black box. It quantifies both kinds of uncertainty jointly by perturbing queries and responses, embedding the perturbations in a semantic space, constructing their Gram matrix, and computing its determinant, the "semantic volume". Contribution/Results: This is the first method to jointly model input ambiguity (external) and knowledge uncertainty (internal). The measure is geometrically grounded and admits a theoretical interpretation via differential entropy, unifying and extending sampling-based measures such as semantic entropy. Experiments show it consistently outperforms baselines on both internal and external uncertainty detection, with strong robustness, black-box compatibility, and interpretability, improving the reliability of LLM outputs.
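As a concrete illustration of the dispersion measure described above, the following is a minimal Python sketch of the Gram-determinant computation, assuming a matrix of embedding vectors as input. The function name, the unit normalization, and the small ridge term are assumptions made here for clarity and numerical stability; they are not details quoted from the paper.

```python
import numpy as np

def semantic_volume(embeddings: np.ndarray, eps: float = 1e-6) -> float:
    """Log-determinant of the Gram matrix of embedding vectors.

    embeddings: (n, d) array with one row per perturbed query or
    sampled response. A larger value means the vectors are more
    dispersed in semantic space, i.e., higher uncertainty.
    """
    # Unit-normalize rows so the volume reflects angular spread,
    # not embedding magnitude (an assumption of this sketch).
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    gram = X @ X.T                    # (n, n) cosine-similarity matrix
    gram += eps * np.eye(len(X))      # ridge keeps the matrix positive definite
    # slogdet is numerically safer than log(det(...)) near singularity.
    _sign, logdet = np.linalg.slogdet(gram)
    return float(logdet)
```

Working in log space is a natural choice here: raw determinants of near-collinear embedding sets underflow to zero, while the log-determinant stays comparable across queries.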

📝 Abstract
Large language models (LLMs) have demonstrated remarkable performance across diverse tasks by encoding vast amounts of factual knowledge. However, they are still prone to hallucinations, generating incorrect or misleading information, often accompanied by high uncertainty. Existing methods for hallucination detection primarily focus on quantifying internal uncertainty, which arises from missing or conflicting knowledge within the model. However, hallucinations can also stem from external uncertainty, where ambiguous user queries lead to multiple possible interpretations. In this work, we introduce Semantic Volume, a novel mathematical measure for quantifying both external and internal uncertainty in LLMs. Our approach perturbs queries and responses, embeds them in a semantic space, and computes the determinant of the Gram matrix of the embedding vectors, capturing their dispersion as a measure of uncertainty. Our framework provides a generalizable and unsupervised uncertainty detection method without requiring white-box access to LLMs. We conduct extensive experiments on both external and internal uncertainty detection, demonstrating that our Semantic Volume method consistently outperforms existing baselines in both tasks. Additionally, we provide theoretical insights linking our measure to differential entropy, unifying and extending previous sampling-based uncertainty measures such as semantic entropy. Semantic Volume is shown to be a robust and interpretable approach to improving the reliability of LLMs by systematically detecting uncertainty in both user queries and model responses.
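The differential-entropy link mentioned in the abstract can be made concrete with the standard Gaussian identity below (a textbook formula, not an equation quoted from the paper). If the perturbed embeddings are modeled as draws from a d-dimensional Gaussian with covariance Σ, then

```latex
h\big(\mathcal{N}(\mu,\Sigma)\big)
  = \tfrac{1}{2}\log\!\big((2\pi e)^{d}\det\Sigma\big)
  = \tfrac{d}{2}\log(2\pi e) + \tfrac{1}{2}\log\det\Sigma
```

so the log-determinant of a second-moment (Gram) matrix tracks differential entropy up to an additive constant, which is the sense in which a volume measure can unify and extend sampling-based measures such as semantic entropy.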
Problem

Research questions and friction points this paper is trying to address.

Quantify external and internal uncertainty in LLMs
Detect hallucinations caused by ambiguous user queries
Improve reliability of LLMs through unsupervised uncertainty detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semantic Volume quantifies external and internal uncertainty.
Perturbs queries and responses, embeds them in a semantic space, and computes the Gram-matrix determinant as a dispersion score (see the usage sketch after this list).
Unsupervised and generalizable; outperforms existing baselines on both external and internal uncertainty detection.
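For a sense of how such a measure would be used end to end, here is a hypothetical usage example built on the `semantic_volume` sketch earlier in this summary. The encoder choice (`all-MiniLM-L6-v2` via sentence-transformers) and the sample responses are illustrative assumptions, not the paper's experimental setup: for internal uncertainty one would embed several sampled responses to the same query, and for external uncertainty, paraphrased perturbations of the query itself.

```python
# Assumes the semantic_volume sketch defined earlier in this summary.
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative encoder choice

# Several sampled responses to one query (internal-uncertainty case).
responses = [
    "The Eiffel Tower is in Paris.",
    "It stands in Paris, France.",
    "You can find the Eiffel Tower in Paris.",
]
vol = semantic_volume(encoder.encode(responses))
print(f"semantic volume (log-det): {vol:.3f}")  # low value => consistent answers
```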