No Hard Negatives Required: Concept Centric Learning Leads to Compositionality without Degrading Zero-shot Capabilities of Contrastive Models

📅 2026-03-26
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the limitations of existing contrastive vision-language models in compositional representation learning and their reliance on hard negative samples, which can degrade zero-shot and retrieval performance. The authors propose an approach that eliminates the need for hard negatives by using standard NLP tools to extract short, concept-centric caption parts. Corresponding visual embeddings are obtained from the image encoder via parameter-free cross-modal attentional pooling and aligned with simple auxiliary contrastive losses. The method substantially improves compositional generalization without increasing inference cost, achieving state-of-the-art results on standard compositionality benchmarks while preserving or even improving zero-shot classification and image–text retrieval performance.
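The parameter-free cross-modal attentional pooling could look roughly like the sketch below: the concept's text embedding acts as the attention query over the image patch embeddings, with no learned projections. The shapes and function names here are illustrative assumptions, not the authors' actual code:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - np.max(x, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_modal_attention_pool(patch_embeds, concept_embed):
    """Pool image patch embeddings into one concept-centric visual embedding.

    patch_embeds: (num_patches, dim) patch tokens from the image encoder.
    concept_embed: (dim,) text embedding of a short concept phrase.
    The concept embedding is the query; the patches serve as both keys and
    values, so the operation introduces no learnable parameters.
    """
    dim = patch_embeds.shape[-1]
    scores = patch_embeds @ concept_embed / np.sqrt(dim)  # (num_patches,)
    weights = softmax(scores)                             # attention weights
    return weights @ patch_embeds                         # (dim,) pooled embedding
```

Downstream, each pooled visual embedding would be contrasted against its concept's text embedding, which is what keeps the extra computation out of the inference path.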

πŸ“ Abstract
Contrastive vision-language (V&L) models remain a popular choice for various applications. However, several limitations have emerged, most notably the limited ability of V&L models to learn compositional representations. Prior methods often addressed this limitation by generating custom training data to obtain hard negative samples. Hard negatives have been shown to improve performance on compositionality tasks, but are often specific to a single benchmark, do not generalize, and can cause substantial degradation of basic V&L capabilities such as zero-shot or retrieval performance, rendering them impractical. In this work we follow a different approach. We identify two root causes that limit the compositionality performance of V&L models: 1) long training captions do not require a compositional representation; and 2) the final global pooling in the text and image encoders leads to a complete loss of the information necessary to learn binding in the first place. As a remedy, we propose two simple solutions: 1) we obtain short concept-centric caption parts using standard NLP software and align them with the image; and 2) we introduce parameter-free cross-modal attention pooling to obtain concept-centric visual embeddings from the image encoder. With these two changes and simple auxiliary contrastive losses, we obtain SOTA performance on standard compositionality benchmarks, while maintaining or improving strong zero-shot and retrieval capabilities. This is achieved without increasing inference cost. We release the code for this work at https://github.com/SamsungLabs/concept_centric_clip.
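The abstract attributes the extraction of short concept-centric caption parts to "standard NLP software" (e.g., a noun-chunk parser). The toy splitter below is a hypothetical stand-in for such tooling, not the authors' pipeline; it merely illustrates turning one long caption into several short concept phrases:

```python
import re

# Connectives at which a caption is split into candidate concept phrases.
# A real pipeline would use an NLP parser's noun chunks instead of a regex.
SPLIT_WORDS = r"\b(?:and|with|on|in|at|next to|under|over|near|while)\b"

def concept_parts(caption, max_words=4):
    """Split a caption into short concept-centric fragments (toy heuristic)."""
    fragments = re.split(SPLIT_WORDS, caption.lower())
    parts = []
    for frag in fragments:
        words = frag.strip(" ,.").split()
        if 0 < len(words) <= max_words:  # keep only short phrases
            parts.append(" ".join(words))
    return parts
```

For example, `concept_parts("A red cup on a wooden table and a small dog")` yields the three short phrases `"a red cup"`, `"a wooden table"`, and `"a small dog"`, each of which would be aligned with the image separately.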
Problem

Research questions and friction points this paper is trying to address.

compositionality
contrastive vision-language models
zero-shot capabilities
hard negatives
compositional representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

concept-centric learning
compositionality
contrastive vision-language models
cross-modal attention pooling
zero-shot capability
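The concept-centric learning listed above relies on simple auxiliary contrastive losses that align pooled visual concept embeddings with their concept text embeddings. A generic symmetric InfoNCE objective, sketched below, is one plausible form; the paper's exact losses and temperature may differ:

```python
import numpy as np

def info_nce(visual, text, temperature=0.07):
    """Symmetric InfoNCE over matched concept pairs (a generic stand-in
    for the paper's auxiliary contrastive losses).

    visual, text: (batch, dim) embeddings where row i of each matrix
    describes the same concept; mismatched rows act as in-batch negatives.
    """
    v = visual / np.linalg.norm(visual, axis=1, keepdims=True)
    t = text / np.linalg.norm(text, axis=1, keepdims=True)
    logits = v @ t.T / temperature  # (batch, batch) cosine similarities
    idx = np.arange(len(logits))    # matching pairs sit on the diagonal
    # Cross-entropy in both directions: visual-to-text and text-to-visual.
    lp_v2t = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    lp_t2v = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    return (-lp_v2t[idx, idx].mean() - lp_t2v[idx, idx].mean()) / 2
```

Note that this uses only in-batch negatives, consistent with the paper's claim that no hard negative samples are required.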
🔎 Similar Papers
2023-11-28 · European Conference on Computer Vision · Citations: 4