🤖 AI Summary
Vision-language models (e.g., CLIP) often assign high confidence to incorrect predictions in open-vocabulary classification, compromising reliability in safety-critical applications. To address this, we propose a training-free, post-hoc uncertainty estimation method. Our approach introduces class-specific probabilistic embeddings: using image-encoder features, it fits a multivariate Gaussian distribution per class in the projection space to model intra-class visual consistency. These embeddings enable plug-and-play confidence calibration that is robust to distribution shift and requires only ~10 samples per class to work well. Evaluated on benchmarks including ImageNet and Flowers102, our method substantially outperforms both deterministic and probabilistic baselines, achieving state-of-the-art error detection performance.
📝 Abstract
Vision-language models (VLMs), such as CLIP, have gained popularity for their strong open-vocabulary classification performance, but they are prone to assigning high confidence scores to misclassifications, limiting their reliability in safety-critical applications. We introduce a training-free, post-hoc uncertainty estimation method for contrastive VLMs that can be used to detect erroneous predictions. The key to our approach is to measure visual feature consistency within a class, using feature projection combined with multivariate Gaussians to create class-specific probabilistic embeddings. Our method is VLM-agnostic, requires no fine-tuning, demonstrates robustness to distribution shift, and works effectively with as few as 10 training images per class. Extensive experiments on ImageNet, Flowers102, Food101, EuroSAT, and DTD show state-of-the-art error detection performance, significantly outperforming both deterministic and probabilistic VLM baselines. Code is available at https://github.com/zhenxianglin/ICPE.
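The core idea described in the abstract — fitting one multivariate Gaussian per class over image-encoder features and using the log-density of a test feature under its predicted class as a confidence signal — can be sketched as follows. This is a minimal illustration, not the authors' implementation (see the linked repository for that): the feature vectors are assumed to come from a VLM image encoder's projection space, and the diagonal regularization term is a hypothetical choice to keep covariances invertible when only ~10 samples per class are available.

```python
import numpy as np

def fit_class_gaussians(features_by_class, reg=1e-3):
    """Fit a multivariate Gaussian (mean, covariance) per class.

    features_by_class: dict mapping class name -> (n_samples, d) array of
    image-encoder features (e.g., CLIP projection-space embeddings).
    A small ridge `reg` on the covariance diagonal keeps it invertible
    when the number of samples per class is small (illustrative choice).
    """
    gaussians = {}
    for cls, feats in features_by_class.items():
        mu = feats.mean(axis=0)
        centered = feats - mu
        cov = centered.T @ centered / max(len(feats) - 1, 1)
        cov += reg * np.eye(feats.shape[1])
        # Precompute inverse and log-determinant for fast scoring.
        gaussians[cls] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return gaussians

def confidence(x, predicted_class, gaussians):
    """Gaussian log-density of feature x under the predicted class
    (up to an additive constant). Higher = more consistent with the
    class's visual distribution = more trustworthy prediction."""
    mu, cov_inv, logdet = gaussians[predicted_class]
    diff = x - mu
    return -0.5 * (diff @ cov_inv @ diff + logdet)

# Toy usage with synthetic "features" standing in for encoder outputs.
rng = np.random.default_rng(0)
d = 8
feats = {
    "cat": rng.normal(0.0, 1.0, (10, d)),
    "dog": rng.normal(5.0, 1.0, (10, d)),
}
g = fit_class_gaussians(feats)
x = np.full(d, 5.0)  # a feature that looks like a "dog" sample
print(confidence(x, "dog", g) > confidence(x, "cat", g))  # True
```

In an error-detection setting, predictions whose features score a low log-density under their predicted class's Gaussian would be flagged as likely misclassifications, e.g. by thresholding the score on a held-out set.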