🤖 AI Summary
Current vision-language models (e.g., CLIP) lack robust semantic alignment under negation and paraphrasing: minor lexical changes (e.g., adding a negation) easily induce embedding misalignment, while semantically equivalent expressions (e.g., back-translated paraphrases) fail to map consistently into the shared latent space. To address this, we propose SemCLIP, a framework that combines back-translation augmentation with controlled negative-text generation to construct original–back-translated–negated triplets for training. We further design a new contrastive loss that jointly enforces semantic invariance (across paraphrases) and discriminability (between opposites). Leveraging LLMs for automatic, high-quality triplet construction, SemCLIP achieves a 10.0-percentage-point improvement in image-retrieval accuracy on the CC-Neg benchmark (68.1% → 78.1%) and consistently outperforms CLIP in zero-shot classification. The method substantially improves robustness to both negation and semantic-preserving transformations.
📝 Abstract
Contrastive vision-language models remain the dominant approach for image and text retrieval. Contrastive Language-Image Pre-training (CLIP) trains two neural networks contrastively to align their image and text embeddings in a shared latent space. Recent evaluations of CLIP on negated or paraphrased text have shown mixed performance: negation changes meaning radically with minimal lexical changes, while paraphrasing can produce very different textual expressions with the same intended meaning. This poses a significant challenge for improving the evaluation results and alignment of vision-language models. To address this challenge, this paper evaluates the combination of paraphrasing and negation, proposes a new CLIP contrastive loss function that accounts for both paraphrasing and negation, and applies LLM-generated training triplets consisting of original, paraphrased, and negated captions to CLIP-like models. The approach, called SemCLIP, is shown to move paraphrased captions towards the original image embeddings while pushing negated captions further away in embedding space. Empirically, SemCLIP preserves CLIP's performance while considerably increasing the distances to negated captions. On the CC-Neg benchmark, using an original-over-negation image-retrieval accuracy metric, SemCLIP improves accuracy from 68.1% to 78.1%. Although results on the SugarCrepe++ benchmark are mixed relative to CLIP, SemCLIP generally outperforms models trained with negated captions. This robustness to negation extends to downstream zero-shot classification, where SemCLIP pre-trained on SugarCrepe++ outperforms CLIP on all tested downstream tasks. These results indicate that SemCLIP can achieve significant robustness to semantic transformations.
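The objective described above — pulling paraphrased captions toward the original image embedding while pushing negated captions further away — can be sketched as a simple margin-based triplet loss over embedding similarities. This is a hypothetical illustration under stated assumptions, not the paper's exact formulation: the function name `semclip_triplet_loss`, the use of cosine similarity, and the `margin` value are all assumptions introduced here.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def semclip_triplet_loss(img, para, neg, margin=0.2):
    """Hinge-style triplet loss (illustrative, not the paper's exact loss).

    Zero when the paraphrase is closer to the image than the negated
    caption by at least `margin`; otherwise penalizes the violation,
    pulling the paraphrase toward the image and pushing the negation away.
    """
    return max(0.0, margin - (cosine(img, para) - cosine(img, neg)))
```

In practice such a term would be averaged over a batch of LLM-generated (original image, paraphrased caption, negated caption) triplets and combined with CLIP's standard image-text contrastive loss, so the original alignment is preserved while negated captions are displaced in the latent space.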