Variational Uncertainty Decomposition for In-Context Learning

📅 2025-09-02
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This work addresses the challenge of entangled uncertainty sources in large language models' (LLMs) in-context learning (ICL), where epistemic uncertainty (arising from insufficient in-context information) and aleatoric uncertainty (stemming from inherent task noise) are conflated. The authors propose a variational framework that explicitly disentangles these two uncertainty types. Methodologically, they optimize auxiliary queries as implicit probes to bypass explicit posterior sampling, yielding an efficient upper bound on the aleatoric uncertainty of an LLM's in-context predictions and a corresponding lower bound on the epistemic uncertainty. Grounded in Bayesian modeling and variational inference, the approach is evaluated on synthetic benchmarks and real-world ICL tasks, including question answering and logical reasoning. Results show that the decomposed uncertainties exhibit the discriminability, calibration, and interpretability expected of epistemic and aleatoric uncertainty, supporting more reliable ICL predictions.

📝 Abstract
As large language models (LLMs) gain popularity in conducting prediction tasks in-context, understanding the sources of uncertainty in in-context learning becomes essential to ensuring reliability. The recent hypothesis of in-context learning performing predictive Bayesian inference opens the avenue for Bayesian uncertainty estimation, particularly for decomposing uncertainty into epistemic uncertainty due to lack of in-context data and aleatoric uncertainty inherent in the in-context prediction task. However, the decomposition idea remains under-explored due to the intractability of the latent parameter posterior from the underlying Bayesian model. In this work, we introduce a variational uncertainty decomposition framework for in-context learning without explicitly sampling from the latent parameter posterior, by optimising auxiliary queries as probes to obtain an upper bound to the aleatoric uncertainty of an LLM's in-context learning procedure, which also induces a lower bound to the epistemic uncertainty. Through experiments on synthetic and real-world tasks, we show quantitatively and qualitatively that the decomposed uncertainties obtained from our method exhibit desirable properties of epistemic and aleatoric uncertainty.
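The decomposition described in the abstract can be illustrated with a toy numerical sketch. This is not the paper's actual algorithm (which optimises auxiliary queries variationally against an LLM); it only shows the bookkeeping under common entropy-based definitions: total predictive uncertainty is the entropy of the in-context prediction, an aleatoric upper bound is taken as the smallest entropy achieved after conditioning on an auxiliary query-answer probe, and the induced epistemic lower bound is the difference. All distributions and probe results below are made-up illustrative numbers.

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

# Hypothetical predictive distributions over 3 answer classes.
# p_marginal: the LLM's in-context prediction p(y | x, D).
# p_conditionals: predictions after appending an auxiliary
# query-answer pair aux_j to the context, p(y | x, D, aux_j).
p_marginal = [0.5, 0.3, 0.2]
p_conditionals = [
    [0.8, 0.15, 0.05],  # probe 1 sharpens the prediction
    [0.7, 0.2, 0.1],    # probe 2 sharpens it less
]

total = entropy(p_marginal)

# Aleatoric upper bound: the smallest conditional entropy across the
# auxiliary probes (extra in-context evidence can reduce epistemic
# uncertainty, but not the task's inherent noise).
aleatoric_ub = min(entropy(p) for p in p_conditionals)

# Induced epistemic lower bound: total minus the aleatoric upper bound.
epistemic_lb = total - aleatoric_ub

print(f"total uncertainty : {total:.3f} nats")
print(f"aleatoric         <= {aleatoric_ub:.3f} nats")
print(f"epistemic         >= {epistemic_lb:.3f} nats")
```

In the real framework the probes are optimised rather than enumerated, and the bounds are derived variationally, but the arithmetic relationship between the three quantities is the same: tighter aleatoric upper bounds directly yield tighter epistemic lower bounds.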
Problem

Research questions and friction points this paper is trying to address.

Decomposing uncertainty in in-context learning
Estimating epistemic and aleatoric uncertainty sources
Avoiding latent parameter posterior sampling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Variational framework decomposes uncertainty without posterior sampling
Optimizes auxiliary queries as probes to bound uncertainties
Provides epistemic and aleatoric uncertainty decomposition for LLMs