🤖 AI Summary
Existing methods for uncertainty quantification in large language models are either computationally expensive or rely on training data that is often inaccessible. This work proposes a lightweight approach that requires only a single forward-backward pass, leveraging a first-order Taylor expansion, gradient norms, and an isotropic covariance assumption over model parameters to efficiently disentangle epistemic and aleatoric uncertainty without access to training data. The method shows strong agreement with MCMC-based reference estimates on synthetic tasks, and the agreement improves with model scale. It achieves the highest mean AUROC on TruthfulQA while performing near chance on TriviaQA, suggesting it captures a distinct uncertainty signal rather than merely reflecting the model's self-assessed confidence.
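The first-order Taylor step described above is essentially a delta-method argument; a sketch under the stated assumptions (the notation here is illustrative, not taken from the paper) looks like:

```latex
% First-order Taylor expansion of the prediction p(\theta) around the point estimate \hat\theta:
p(\theta) \approx p(\hat\theta) + \nabla p(\hat\theta)^\top (\theta - \hat\theta)
% Epistemic (parameter-induced) variance under parameter covariance \Sigma:
\mathrm{Var}[p(\theta)] \approx \nabla p(\hat\theta)^\top \Sigma \, \nabla p(\hat\theta)
% With the isotropy assumption \Sigma = \sigma^2 I, this collapses to a squared gradient norm:
\mathrm{Var}[p(\theta)] \approx \sigma^2 \,\lVert \nabla p(\hat\theta) \rVert^2
% Aleatoric term: Bernoulli variance of the point prediction
\mathrm{Var}_{\text{alea}} = p(\hat\theta)\,\bigl(1 - p(\hat\theta)\bigr)
```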
📝 Abstract
Existing methods for quantifying predictive uncertainty in neural networks are either computationally intractable for large language models or require access to training data that is typically unavailable. We derive a lightweight alternative through two approximations: a first-order Taylor expansion that expresses uncertainty in terms of the gradient of the prediction and the parameter covariance, and an isotropy assumption on the parameter covariance. Together, these yield epistemic uncertainty as the squared gradient norm and aleatoric uncertainty as the Bernoulli variance of the point prediction, from a single forward-backward pass through an unmodified pretrained model. We justify the isotropy assumption by showing that covariance estimates built from non-training data introduce structured distortions that isotropic covariance avoids, and that theoretical results on the spectral properties of large networks support the approximation at scale. Validation against reference Markov Chain Monte Carlo estimates on synthetic problems shows strong correspondence that improves with model size. We then use the estimates to investigate when each uncertainty type carries useful signal for predicting answer correctness in question answering with large language models, revealing a benchmark-dependent divergence: the combined estimate achieves the highest mean AUROC on TruthfulQA, where questions involve genuine conflict between plausible answers, but falls to near chance on TriviaQA's factual recall, suggesting that parameter-level uncertainty captures a fundamentally different signal than self-assessment methods.
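A minimal sketch of the single forward-backward recipe the abstract describes, using a toy logistic model in place of an LLM so the gradient can be written out by hand. The isotropic covariance scale `sigma2` is a hypothetical hyperparameter introduced here for illustration; the abstract does not specify how it is set.

```python
import math


def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))


def uncertainty_estimates(w, x, sigma2=1.0):
    """Toy sketch of the paper's two estimates for a logistic model p = sigmoid(w.x).

    epistemic  ~ sigma2 * ||grad_w p||^2   (Taylor expansion + isotropic covariance)
    aleatoric  ~ p * (1 - p)               (Bernoulli variance of the point prediction)

    `sigma2` is an assumed isotropic-covariance scale, not from the source.
    """
    # Forward pass: point prediction
    z = sum(wi * xi for wi, xi in zip(w, x))
    p = sigmoid(z)
    # Backward pass: d p / d w_i = p * (1 - p) * x_i  (chain rule through the sigmoid)
    grad = [p * (1.0 - p) * xi for xi in x]
    epistemic = sigma2 * sum(g * g for g in grad)  # squared gradient norm
    aleatoric = p * (1.0 - p)                      # Bernoulli variance
    return p, epistemic, aleatoric
```

For an LLM one would replace the hand-written gradient with autograd over the token probability, but the structure — one forward pass for `p`, one backward pass for the gradient norm — is the same.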