🤖 AI Summary
Large Vision-Language Models (LVLMs) frequently generate outputs that appear superficially plausible yet lack semantic reliability, making robust semantic uncertainty estimation necessary. Existing clustering-based semantic consistency methods are unstable under lexical perturbations. To address this, we propose Semantic Gaussian Process Uncertainty (SGPU), a framework that replaces fragile clustering with a geometric semantic consistency metric derived from the eigenspectrum of the answer-embedding Gram matrix. SGPU constructs a Gaussian Process Classifier applicable to both black-box and white-box settings, and supports cross-model and cross-modal transfer. Evaluated on six LVLMs/LLMs across eight benchmarks spanning VQA, image classification, and text-based QA, SGPU achieves state-of-the-art performance, significantly improving calibration (lower Expected Calibration Error) and discrimination (higher AUROC and AUARC), while generalizing strongly across models and tasks.
📝 Abstract
Large Vision-Language Models (LVLMs) often produce plausible but unreliable outputs, making robust uncertainty estimation essential. Recent work on semantic uncertainty relies on external models to cluster multiple sampled responses and measure their semantic consistency. However, these clustering methods are often fragile: they are highly sensitive to minor phrasing variations and can incorrectly group or separate semantically similar answers, leading to unreliable uncertainty estimates. We propose Semantic Gaussian Process Uncertainty (SGPU), a Bayesian framework that quantifies semantic uncertainty by analyzing the geometric structure of answer embeddings, avoiding brittle clustering. SGPU maps generated answers into a dense semantic space, computes the Gram matrix of their embeddings, and summarizes their semantic configuration via its eigenspectrum. This spectral representation is then fed into a Gaussian Process Classifier that learns to map patterns of semantic consistency to predictive uncertainty and can be applied in both black-box and white-box settings. Across six LLMs and LVLMs on eight datasets spanning VQA, image classification, and textual QA, SGPU consistently achieves state-of-the-art calibration (ECE) and discrimination (AUROC, AUARC) performance. We further show that SGPU transfers across models and modalities, indicating that its spectral representation captures general patterns of semantic uncertainty.
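As a rough illustration of this pipeline, the sketch below embeds a set of sampled answers, computes the eigenspectrum of their Gram matrix, and fits a GP classifier on the resulting spectral features. This is a minimal sketch, not the authors' implementation: the embedding model (`all-MiniLM-L6-v2`), the top-k eigenvalue featurization, and the variables `sampled_answer_sets` / `correct_labels` are illustrative assumptions.

```python
# Minimal sketch of an SGPU-style pipeline (illustrative; not the paper's code).
# Assumptions: sentence-transformers supplies the answer embeddings, and
# scikit-learn's GaussianProcessClassifier stands in for the GP classifier.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

def spectral_features(answers, k=5):
    """Embed sampled answers, form their Gram matrix, return top-k eigenvalues."""
    emb = embedder.encode(answers, normalize_embeddings=True)  # (n, d)
    gram = emb @ emb.T                                         # (n, n) Gram matrix
    eigvals = np.sort(np.linalg.eigvalsh(gram))[::-1]          # descending spectrum
    eigvals = eigvals / eigvals.sum()                          # normalize (trace = n)
    feats = np.zeros(k)                                        # pad/truncate to k
    feats[: min(k, len(eigvals))] = eigvals[:k]
    return feats

# Training data (hypothetical): one spectral feature vector per question,
# built from its sampled answers, plus a binary correctness label.
X = np.stack([spectral_features(ans) for ans in sampled_answer_sets])
y = np.array(correct_labels)

gpc = GaussianProcessClassifier(kernel=RBF(length_scale=1.0)).fit(X, y)
confidence = gpc.predict_proba(X)[:, 1]  # predicted correctness probability
```

Intuitively, when the sampled answers agree semantically, their embeddings are nearly collinear and one eigenvalue dominates the normalized spectrum; semantic disagreement spreads mass across the eigenvalues, and the GP classifier learns to map these spectral patterns to a correctness probability.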