Contrastive vision-language learning with paraphrasing and negation

📅 2025-11-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current vision-language models (e.g., CLIP) exhibit insufficient semantic alignment robustness under negation and paraphrasing: minor lexical changes (e.g., negation) easily induce embedding misalignment, while semantically equivalent expressions (e.g., back-translation) fail to map consistently into the shared latent space. To address this, we propose SemCLIP—a novel framework integrating back-translation augmentation with controlled negative text generation to construct original–back-translated–negated triplets for training. We further design a new contrastive loss that jointly enforces semantic invariance (across paraphrases) and discriminability (between opposites). Leveraging LLMs for automatic, high-quality triplet construction, SemCLIP achieves a 10.0-percentage-point improvement in image retrieval accuracy on the CC-Neg benchmark (68.1% → 78.1%) and consistently outperforms CLIP in zero-shot classification. The method significantly enhances robustness to both negation and semantic-preserving transformations.

📝 Abstract
Contrastive vision-language models continue to be the dominant approach for image and text retrieval. Contrastive Language-Image Pre-training (CLIP) trains two neural networks in a contrastive manner to align their image and text embeddings in a shared latent space. Recent results evaluating CLIP on negated or paraphrased text have shown mixed performance, because negation changes meaning radically with minimal lexical changes, while paraphrasing can create very different textual expressions with the same intended meaning. This poses a significant challenge for improving both the evaluation and the alignment of vision-language models. To address this challenge, this paper evaluates the combination of paraphrasing and negation, proposes a new CLIP contrastive loss function accounting for both paraphrasing and negation, and applies LLM-generated training triples, consisting of original, paraphrased and negated textual captions, to the training of CLIP-like models. The approach, called SemCLIP, is shown to move paraphrased captions towards the original image embeddings while pushing negated captions further away in embedding space. Empirically, SemCLIP is shown to preserve CLIP's performance while considerably increasing the distances to negated captions. On the CC-Neg benchmark, using an original-over-negation image-retrieval accuracy metric, SemCLIP improves accuracy from 68.1% to 78.1%. Although results are mixed when compared with CLIP on the Sugarcrepe++ benchmark, SemCLIP's performance is generally better than that of models trained with negated captions. This robustness to negation extends to downstream zero-shot classification tasks, where SemCLIP pre-trained on Sugarcrepe++ performs better than CLIP on all tested downstream tasks. These results indicate that SemCLIP can achieve significant robustness to semantic transformations.
Problem

Research questions and friction points this paper is trying to address.

CLIP models struggle with negated text meaning changes
Paraphrased text creates alignment challenges in vision-language models
Current models show mixed performance on semantic transformations
Innovation

Methods, ideas, or system contributions that make the work stand out.

New CLIP loss function for paraphrasing and negation
LLM-generated training triples with varied captions
Adjusts embeddings to handle semantic transformations
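The paper does not reproduce the exact loss here, but the idea of jointly enforcing paraphrase invariance and negation discriminability over (original, paraphrased, negated) triplets can be sketched as a contrastive term plus a margin term. In this minimal sketch, `semclip_style_loss`, the temperature `tau`, and the `margin` value are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def normalize(v):
    # Project an embedding onto the unit sphere (cosine-similarity space)
    return v / np.linalg.norm(v)

def semclip_style_loss(img, orig, para, neg, tau=0.07, margin=0.2):
    """Hypothetical triplet-style loss: attract original and paraphrased
    captions to the image embedding, repel the negated caption.
    `tau` and `margin` are assumed hyperparameters for illustration."""
    img, orig, para, neg = map(normalize, (img, orig, para, neg))
    s_orig, s_para, s_neg = img @ orig, img @ para, img @ neg
    # InfoNCE-style term: original/paraphrase should outscore the negation
    logits = np.array([s_orig, s_para, s_neg]) / tau
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    nce = -np.log(probs[0] + probs[1])
    # Margin term: negated caption must sit at least `margin` below paraphrase
    hinge = max(0.0, margin + s_neg - s_para)
    return nce + hinge
```

With this shape, a negated caption pointing away from the image embedding yields a lower loss than one aligned with it, which matches the stated goal of pushing negations further away while keeping paraphrases close.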
Kwun Ho Ngan
Fujitsu Research of Europe, United Kingdom
Saman Sadeghi Afgeh
City St George’s, University of London, United Kingdom
Joe Townsend
Fujitsu Research of Europe, United Kingdom
Artur d'Avila Garcez
Professor of Computer Science, City, University of London