🤖 AI Summary
Existing vision-language models (VLMs) rely on deterministic embeddings, which limits their ability to capture the inherently many-to-many semantic relationships between images and texts. This work proposes ProLIP, the first probabilistic VLM pre-trained on billion-scale image-text data using only probabilistic objectives, abandoning the point-to-point mapping assumption. ProLIP explicitly models semantic uncertainty through an "uncertainty token" that adds no extra parameters, and captures hierarchical inclusion relations through a novel inclusion loss applied between image-text pairs and between masked and original inputs. The method trains probabilistic contrastive and inclusion objectives end-to-end atop a ViT-B/16 backbone. On ImageNet, ProLIP reaches 74.6% zero-shot accuracy, and exploiting its text uncertainty estimates raises this to 75.8% in a few-shot setting, outperforming deterministic baselines. Its uncertainty estimates align with human intuition (e.g., shorter captions yield higher uncertainty) and improve robustness and interpretability on downstream tasks.
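The probabilistic objectives above operate on distribution-valued embeddings rather than points. As a minimal illustration (not the paper's exact loss), if each image or text is embedded as a diagonal Gaussian, the expected squared distance between two such embeddings has a closed form, and this is the kind of quantity a probabilistic contrastive objective can score: uncertain inputs (large variance) are automatically pushed further from everything.

```python
def csd(mu1, var1, mu2, var2):
    """Closed-form expected squared distance between samples drawn from
    two diagonal Gaussians N(mu1, diag(var1)) and N(mu2, diag(var2)):

        E||z1 - z2||^2 = ||mu1 - mu2||^2 + sum(var1) + sum(var2)

    Illustrative sketch only; the function name and interface are
    assumptions, not ProLIP's actual implementation.
    """
    return (
        sum((a - b) ** 2 for a, b in zip(mu1, mu2))
        + sum(var1)
        + sum(var2)
    )

# Two embeddings with identical means still differ in expectation
# because their variances contribute to the expected distance.
print(csd([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0]))  # 4.0
```

Note the asymmetry with deterministic similarity: here, increasing either input's uncertainty increases the distance even when the means coincide.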
📝 Abstract
Vision-language models (VLMs) embed aligned image-text pairs into a joint space but often rely on deterministic embeddings, assuming a one-to-one correspondence between images and texts. This oversimplifies real-world relationships, which are inherently many-to-many, with multiple captions describing a single image and vice versa. We introduce Probabilistic Language-Image Pre-training (ProLIP), the first probabilistic VLM pre-trained on a billion-scale image-text dataset using only probabilistic objectives, achieving a strong zero-shot capability (e.g., 74.6% ImageNet zero-shot accuracy with ViT-B/16). ProLIP efficiently estimates uncertainty by an "uncertainty token" without extra parameters. We also introduce a novel inclusion loss that enforces distributional inclusion relationships between image-text pairs and between original and masked inputs. Experiments demonstrate that, by leveraging uncertainty estimates, ProLIP benefits downstream tasks and aligns with intuitive notions of uncertainty, e.g., shorter texts being more uncertain and more general inputs including specific ones. Utilizing text uncertainties, we further improve ImageNet accuracy from 74.6% to 75.8% (under a few-shot setting), supporting the practical advantages of our probabilistic approach. The code is available at https://github.com/naver-ai/prolip.
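The inclusion loss enforces that one distribution lies "inside" another, e.g., a masked input's embedding inside the original's, or a specific caption's inside a more general one's. The paper defines its own inclusion measure; as an illustrative stand-in (an assumption, not ProLIP's formula), the closed-form KL divergence between diagonal Gaussians is a standard asymmetric quantity that is small when distribution A is concentrated within distribution B and large in the reverse direction:

```python
import math

def gaussian_kl(mu_a, var_a, mu_b, var_b):
    """KL( N(mu_a, diag(var_a)) || N(mu_b, diag(var_b)) ) in closed form.

    Per dimension: 0.5 * ( log(vb/va) + (va + (ma - mb)^2) / vb - 1 ).
    Asymmetric: small when A sits inside B, large when A spills outside B.
    Illustrative sketch only, not the paper's inclusion measure.
    """
    kl = 0.0
    for ma, va, mb, vb in zip(mu_a, var_a, mu_b, var_b):
        kl += 0.5 * (math.log(vb / va) + (va + (ma - mb) ** 2) / vb - 1.0)
    return kl

# A narrow Gaussian inside a wide one scores lower than the reverse,
# matching the intuition "general inputs include specific ones".
narrow_in_wide = gaussian_kl([0.0], [0.25], [0.0], [1.0])
wide_in_narrow = gaussian_kl([0.0], [1.0], [0.0], [0.25])
print(narrow_in_wide < wide_in_narrow)  # True
```

The asymmetry is the key property: a symmetric distance cannot express that "dog photo" should be included in "animal photo" but not vice versa.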