Post-hoc Probabilistic Vision-Language Models

📅 2024-12-08
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
Vision-language models (VLMs) such as CLIP lack principled uncertainty quantification under domain shift: their deterministic input-to-embedding mappings cannot capture the posterior uncertainty induced by distributional shifts. To address this, we propose a post-hoc Bayesian approximation framework for VLMs that requires no retraining. By imposing learnable Gaussian priors on the final-layer text and image embedding parameters, we analytically derive the posterior distribution of the cosine similarity, enabling rigorous uncertainty quantification. Our method supports plug-and-play uncertainty calibration and interpretable analysis, and guides high-quality support-set selection in active learning. Experiments demonstrate significant improvements: a 32% reduction in expected calibration error (ECE) and a +4.7% gain in downstream task accuracy under equal sample budgets. The approach delivers reliable, calibrated uncertainty estimates critical for safety-critical applications.

📝 Abstract
Vision-language models (VLMs), such as CLIP and SigLIP, have found remarkable success in classification, retrieval, and generative tasks. For this, VLMs deterministically map images and text descriptions to a joint latent space in which their similarity is assessed using the cosine similarity. However, a deterministic mapping of inputs fails to capture uncertainties over concepts arising from domain shifts when used in downstream tasks. In this work, we propose post-hoc uncertainty estimation in VLMs that does not require additional training. Our method leverages a Bayesian posterior approximation over the last layers in VLMs and analytically quantifies uncertainties over cosine similarities. We demonstrate its effectiveness for uncertainty quantification and support set selection in active learning. Compared to baselines, we obtain improved and well-calibrated predictive uncertainties, interpretable uncertainty estimates, and sample-efficient active learning. Our results show promise for safety-critical applications of large-scale models.
Problem

Research questions and friction points this paper is trying to address.

Estimating uncertainties in vision-language models without retraining.
Quantifying uncertainties over cosine similarities in VLMs.
Improving active learning with interpretable uncertainty estimates.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post-hoc uncertainty estimation without additional training.
Bayesian posterior approximation over the last layers of VLMs.
Analytical quantification of uncertainties over cosine similarities.
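The core idea above, propagating Gaussian uncertainty on the embeddings through to a distribution over cosine similarities, can be illustrated with a simple Monte Carlo sketch. Note that the paper derives this posterior analytically; the sampling approach below, along with the function name and diagonal-covariance assumption, is a hypothetical illustration, not the authors' implementation:

```python
import numpy as np

def cosine_similarity_uncertainty(mu_img, mu_txt, var_img, var_txt,
                                  n_samples=10_000, seed=0):
    """Monte Carlo estimate of the mean and std of the cosine similarity
    when image/text embeddings carry Gaussian uncertainty (diagonal
    covariance). Illustrative sketch only; the paper computes this
    distribution in closed form."""
    rng = np.random.default_rng(seed)
    # Sample perturbed embeddings around their posterior means.
    z_img = mu_img + rng.standard_normal((n_samples, mu_img.size)) * np.sqrt(var_img)
    z_txt = mu_txt + rng.standard_normal((n_samples, mu_txt.size)) * np.sqrt(var_txt)
    # Cosine similarity per sample.
    sims = np.sum(z_img * z_txt, axis=1) / (
        np.linalg.norm(z_img, axis=1) * np.linalg.norm(z_txt, axis=1))
    return sims.mean(), sims.std()

# With zero variance the estimate collapses to the ordinary cosine similarity.
mu_i = np.array([1.0, 0.0, 0.0])
mu_t = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
mean_sim, std_sim = cosine_similarity_uncertainty(mu_i, mu_t, 0.0, 0.0)

# Nonzero embedding variance yields a spread over similarities, which is
# the quantity used for calibration and support-set selection.
_, std_noisy = cosine_similarity_uncertainty(mu_i, mu_t, 0.01, 0.01)
```

In an active-learning loop, samples whose similarity distribution has high standard deviation would be the natural candidates for the support set.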