Modeling Caption Diversity in Contrastive Vision-Language Pretraining

📅 2024-04-30
🏛️ International Conference on Machine Learning
📈 Citations: 13
Influential: 0
🤖 AI Summary
Existing CLIP-style models map each image–text pair onto a single embedding vector, limiting their capacity to capture the semantic diversity inherent in natural-language captions. To address this, the authors propose Llip (Latent Language Image Pretraining), a contrastive vision–language pretraining framework that explicitly models the diversity of captions that could match an image. Llip introduces a text-conditioned visual feature mixing mechanism: the visual encoder outputs a set of feature tokens that are dynamically mixed into a final representation conditioned on the text, enabling fine-grained alignment between an image and its many possible descriptions. With a ViT-G/14 backbone trained contrastively, Llip attains 83.5% zero-shot top-1 accuracy on ImageNet (+1.4% over a similarly sized CLIP), improves zero-shot retrieval on MS-COCO by 6.0%, and yields an average zero-shot classification gain of 2.9% across benchmarks. These results indicate richer visual representations and stronger generalization.

📝 Abstract
There are a thousand ways to caption an image. Contrastive Language-Image Pretraining (CLIP), on the other hand, works by mapping an image and its caption to a single vector -- limiting how well CLIP-like models can represent the diverse ways to describe an image. In this work, we introduce Llip, Latent Language Image Pretraining, which models the diversity of captions that could match an image. Llip's vision encoder outputs a set of visual features that are mixed into a final representation by conditioning on information derived from the text. We show that Llip outperforms non-contextualized baselines like CLIP and SigLIP on a variety of tasks even with large-scale encoders. Llip improves zero-shot classification by an average of 2.9% across zero-shot classification benchmarks with a ViT-G/14 encoder. Specifically, Llip attains a zero-shot top-1 accuracy of 83.5% on ImageNet, outperforming a similarly sized CLIP by 1.4%. We also demonstrate improvement on zero-shot retrieval on MS-COCO by 6.0%. We provide a comprehensive analysis of the components introduced by the method and demonstrate that Llip leads to richer visual representations.
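The text-conditioned mixing described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's implementation: the token count `K`, the single-head dot-product attention, and the weight matrices `Wq`/`Wk` are all assumptions made for the sake of the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mix_visual_features(visual_tokens, text_emb, Wq, Wk):
    """Hypothetical Llip-style mixing: a pooled caption embedding attends over
    K visual mixture tokens, producing one caption-conditioned image vector.

    visual_tokens: (B, K, D) mixture tokens from the vision encoder
    text_emb:      (B, D)    pooled caption embedding
    """
    q = text_emb @ Wq                                        # (B, D) query from text
    k = visual_tokens @ Wk                                   # (B, K, D) keys from visual tokens
    scores = np.einsum('bd,bkd->bk', q, k) / np.sqrt(q.shape[-1])
    attn = softmax(scores, axis=-1)                          # (B, K) caption-dependent weights
    mixed = np.einsum('bk,bkd->bd', attn, visual_tokens)     # weighted mix of visual tokens
    # L2-normalize, as is standard before a contrastive (InfoNCE-style) loss.
    return mixed / np.linalg.norm(mixed, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
B, K, D = 2, 8, 16
vis = rng.standard_normal((B, K, D))
txt = rng.standard_normal((B, D))
Wq = rng.standard_normal((D, D))
Wk = rng.standard_normal((D, D))
out = mix_visual_features(vis, txt, Wq, Wk)
print(out.shape)  # (2, 16)
```

The key design point is that the same image yields a different final representation for each caption it is paired with, which is what lets the model represent many valid descriptions of one image.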
Problem

Research questions and friction points this paper is trying to address.

Modeling diverse image captions in contrastive vision-language pretraining
Overcoming CLIP's single-vector limitation for caption diversity
Improving zero-shot classification and retrieval with richer representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Llip models the diversity of captions that can describe an image
Text-conditioned mixing of visual features into the final representation
Outperforms CLIP and SigLIP on zero-shot classification and retrieval