🤖 AI Summary
CLIP-like models exhibit strong zero-shot classification and retrieval capabilities but suffer from limited compositional reasoning—particularly in understanding semantic relationships between concepts. Existing approaches often enhance lexical sensitivity at the expense of semantic depth and downstream retrieval performance.
Method: We propose CLIC, the first multi-image–multi-text joint contrastive training framework. It integrates cross-sample image–text alignment with a novel relation-aware loss within an efficient fine-tuning pipeline, simultaneously strengthening both lexical- and semantic-level compositionality without compromising zero-shot retrieval accuracy. CLIC is architecture-agnostic and compatible with diverse CLIP variants and pretraining strategies.
Results: On the SugarCrepe++ benchmark, CLIC achieves state-of-the-art compositional reasoning, clearly outperforming the previous best model, CLIPS. Notably, it improves compositional generalization and retrieval effectiveness at the same time, rather than trading one for the other.
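The paper defines CLIC's exact objective; purely as an illustration of the general idea of contrasting multiple images against multiple captions, here is a minimal numpy sketch of a multi-positive InfoNCE-style loss. The function name, the positive-mask formulation, and the temperature value are assumptions for this sketch, not CLIC's actual implementation.

```python
import numpy as np

def _log_softmax(x, axis=1):
    """Numerically stable log-softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def multi_positive_contrastive_loss(img_emb, txt_emb, pos_mask, temperature=0.07):
    """InfoNCE-style loss where each image may have several positive captions.

    img_emb:  (N, d) L2-normalized image embeddings
    txt_emb:  (M, d) L2-normalized text embeddings
    pos_mask: (N, M) boolean array, True where caption j describes image i
    """
    logits = img_emb @ txt_emb.T / temperature   # scaled cosine similarities
    log_probs = _log_softmax(logits, axis=1)     # each image scored against all captions
    # average negative log-likelihood over each image's positive captions
    per_image = -(log_probs * pos_mask).sum(axis=1) / pos_mask.sum(axis=1)
    return per_image.mean()

# Toy batch: 2 images, 3 captions; image 0 has two matching captions.
rng = np.random.default_rng(0)
img = rng.normal(size=(2, 4))
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt = np.vstack([img[0], img[0], img[1]])        # captions aligned with their images
mask = np.array([[True, True, False],
                 [False, False, True]])
loss = multi_positive_contrastive_loss(img, txt, mask)
```

With aligned image-caption pairs the loss is small; scrambling the positive mask increases it, which is the basic signal a cross-sample contrastive objective exploits.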
📝 Abstract
Vision-language models like CLIP have demonstrated remarkable zero-shot capabilities in classification and retrieval. However, these models often struggle with compositional reasoning, i.e., the ability to understand the relationships between concepts. A recent benchmark, SugarCrepe++, reveals that previous works on improving compositionality have mainly improved lexical sensitivity but neglected semantic understanding. In addition, downstream retrieval performance often deteriorates, although one would expect improved compositionality to enhance retrieval. In this work, we introduce CLIC (Compositionally-aware Learning in CLIP), a fine-tuning method based on a novel training technique that combines multiple images and their associated captions. CLIC improves compositionality across architectures as well as across differently pre-trained CLIP models, both in terms of lexical and semantic understanding, and achieves consistent gains in retrieval performance. This even holds for the recent CLIPS, which achieves state-of-the-art retrieval performance: a short fine-tuning with CLIC still improves its retrieval and yields the best compositional CLIP model on SugarCrepe++. All our models and code are available at https://clic-compositional-clip.github.io