Fine-Grained Uncertainty Decomposition in Large Language Models: A Spectral Approach

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addressing the challenge of reliably disentangling aleatoric from epistemic uncertainty in large language models (LLMs), this paper proposes a fine-grained uncertainty decomposition method based on spectral analysis. It introduces the von Neumann entropy, a tool from quantum information theory, into LLM uncertainty modeling, establishing a semantic-similarity-aware decomposition framework; "spectral" here refers to the eigenvalue spectrum underlying the von Neumann entropy rather than to frequency-domain analysis. Combining this entropy with semantic embeddings of model responses enables an interpretable separation of the two uncertainty types. Evaluated on multiple state-of-the-art LLMs (e.g., Llama, Qwen) and benchmark datasets (UncertaintyQA, AmbiQA), the method outperforms existing approaches in estimating both total and aleatoric uncertainty, setting new state-of-the-art results. This work establishes a principled uncertainty-quantification paradigm for trustworthy LLM inference.

📝 Abstract
As Large Language Models (LLMs) are increasingly integrated into diverse applications, obtaining reliable measures of their predictive uncertainty has become critically important. A precise distinction between aleatoric uncertainty, arising from inherent ambiguities within input data, and epistemic uncertainty, originating exclusively from model limitations, is essential to effectively address each uncertainty source. In this paper, we introduce Spectral Uncertainty, a novel approach to quantifying and decomposing uncertainties in LLMs. Leveraging the von Neumann entropy from quantum information theory, Spectral Uncertainty provides a rigorous theoretical foundation for separating total uncertainty into distinct aleatoric and epistemic components. Unlike existing baseline methods, our approach incorporates a fine-grained representation of semantic similarity, enabling nuanced differentiation among various semantic interpretations in model responses. Empirical evaluations demonstrate that Spectral Uncertainty outperforms state-of-the-art methods in estimating both aleatoric and total uncertainty across diverse models and benchmark datasets.
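The central quantity can be sketched numerically. The snippet below is an assumed illustration, not the paper's implementation: it builds a density matrix as a probability-weighted mixture of rank-1 projectors over unit-norm semantic embeddings of sampled responses, then returns its von Neumann entropy. The function name and this particular density-matrix construction are my choices.

```python
import numpy as np

def von_neumann_entropy(embeddings, probs):
    """Von Neumann entropy of a density matrix built from response embeddings.

    embeddings: (n, d) array of unit-norm semantic embeddings of n sampled
        responses (hypothetical construction; the paper's exact definition
        may differ).
    probs: (n,) response probabilities summing to 1.
    """
    # Mix the rank-1 projectors |e_i><e_i| weighted by probabilities;
    # with unit-norm embeddings and normalized probs, trace(rho) == 1.
    rho = np.einsum("i,ij,ik->jk", probs, embeddings, embeddings)
    # A density matrix is Hermitian PSD, so eigvalsh applies; drop
    # numerically zero eigenvalues before taking logs.
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]
    return float(-np.sum(eigvals * np.log(eigvals)))
```

With identical embeddings the mixture is a pure state and the entropy is 0; with orthogonal embeddings and uniform probabilities it reaches log(n), mirroring how semantic agreement among responses lowers the measured uncertainty.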
Problem

Research questions and friction points this paper is trying to address.

Decomposing predictive uncertainty into aleatoric and epistemic components
Quantifying uncertainty using spectral methods from quantum information theory
Providing fine-grained semantic differentiation in language model uncertainty
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spectral Uncertainty uses the von Neumann entropy
Decomposes total uncertainty into distinct components
Incorporates fine-grained semantic similarity representation
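A minimal sketch of such a decomposition, assuming the classical total = aleatoric + epistemic split carried over to von Neumann entropies of per-member density matrices (the paper's exact formulation may differ; all names here are illustrative):

```python
import numpy as np

def entropy(rho):
    """Von Neumann entropy -tr(rho log rho) via the eigenvalue spectrum."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-np.sum(w * np.log(w)))

def decompose(rhos, weights=None):
    """Split total uncertainty into aleatoric and epistemic parts.

    rhos: sequence of density matrices, one per ensemble member (or per
        perturbed prompt). Assumed decomposition: total is the entropy of
        the averaged state, aleatoric is the average per-member entropy,
        and epistemic is their gap (nonnegative by concavity of the
        von Neumann entropy).
    """
    rhos = np.asarray(rhos)
    if weights is None:
        weights = np.full(len(rhos), 1.0 / len(rhos))
    rho_bar = np.einsum("m,mjk->jk", weights, rhos)  # mixture state
    total = entropy(rho_bar)
    aleatoric = float(np.sum(weights * np.array([entropy(r) for r in rhos])))
    epistemic = total - aleatoric
    return total, aleatoric, epistemic
```

For example, two members that each answer confidently but disagree (two orthogonal pure states) yield zero aleatoric uncertainty and purely epistemic uncertainty, matching the intuition that disagreement across models signals a knowledge gap rather than input ambiguity.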