Vision-Language Models Do Not Understand Negation

📅 2025-01-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current vision-language models (VLMs) comprehend negation semantics (e.g., "not", "no") poorly, which limits their use in cross-modal tasks that require distinguishing the presence of objects from their absence. Method: We introduce NegBench, the first multimodal benchmark dedicated to negation understanding, spanning image, video, and medical data with 79K examples across 18 task variations. Systematic evaluation shows that mainstream VLMs perform near chance level on negation-based retrieval and on multiple-choice questions with negated captions. To address this, we take a data-centric approach: fine-tuning CLIP models on large-scale synthetic datasets containing millions of negated captions. Contribution/Results: Fine-tuned models achieve a 10% absolute gain in recall on negated queries and a 40% improvement in accuracy on negation multiple-choice questions, demonstrating substantially stronger negation reasoning across modalities.
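The fine-tuning data described above pairs objects that are present in an image with objects that are absent. A minimal sketch of template-based negated-caption generation is shown below; the templates, function name, and object lists are illustrative assumptions, not the paper's actual data pipeline:

```python
# Illustrative sketch: generating synthetic negated captions from
# lists of present and absent objects. Templates are assumptions,
# not the paper's actual generation procedure.

TEMPLATES = [
    "a photo of a {pos}, but no {neg}",
    "an image that contains a {pos} and not a {neg}",
    "a {pos} without a {neg}",
]

def make_negated_captions(present, absent):
    """Pair each present object with each absent object under every template."""
    captions = []
    for pos in present:
        for neg in absent:
            for template in TEMPLATES:
                captions.append(template.format(pos=pos, neg=neg))
    return captions

captions = make_negated_captions(["dog"], ["cat", "bicycle"])
# 1 present object x 2 absent objects x 3 templates = 6 captions
```

At scale, such templated (or LLM-paraphrased) captions can be paired with images whose annotations confirm the presence and absence of the named objects.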

📝 Abstract
Many practical vision-language applications require models that understand negation, e.g., when using natural language to retrieve images which contain certain objects but not others. Despite advancements in vision-language models (VLMs) through large-scale training, their ability to comprehend negation remains underexplored. This study addresses the question: how well do current VLMs understand negation? We introduce NegBench, a new benchmark designed to evaluate negation understanding across 18 task variations and 79k examples spanning image, video, and medical datasets. The benchmark consists of two core tasks designed to evaluate negation understanding in diverse multimodal settings: Retrieval with Negation and Multiple Choice Questions with Negated Captions. Our evaluation reveals that modern VLMs struggle significantly with negation, often performing at chance level. To address these shortcomings, we explore a data-centric approach wherein we finetune CLIP models on large-scale synthetic datasets containing millions of negated captions. We show that this approach can result in a 10% increase in recall on negated queries and a 40% boost in accuracy on multiple-choice questions with negated captions.
Problem

Research questions and friction points this paper is trying to address.

Visual Understanding
Negation Concept
Image Captioning
Innovation

Methods, ideas, or system contributions that make the work stand out.

NegBench
CLIP Fine-tuning
Negation Understanding