When Negation Is a Geometry Problem in Vision-Language Models

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge that vision–language models such as CLIP struggle to interpret negation in textual descriptions (e.g., “no logo”). It reveals, for the first time, the existence of a geometric direction in CLIP’s embedding space that corresponds to negation semantics. Building on this insight, the authors propose a test-time intervention method that adjusts representations along this direction without requiring model fine-tuning, thereby enabling negation-aware inference. To reliably evaluate negation understanding, they introduce a novel benchmark framework that leverages multimodal large language models as judges. Experimental results demonstrate that the proposed approach significantly improves the model’s ability to comprehend negated semantics, particularly on out-of-distribution image–text pairs.
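The MLLM-as-a-judge evaluation described above can be sketched as follows. This is an illustrative assumption, not the paper's implementation: the `ask_mllm` callable (an MLLM that answers yes/no questions about an image) and the sample format are hypothetical.

```python
def judge_retrieval(samples, ask_mllm):
    """Sketch of MLLM-as-a-judge scoring for negation-aware retrieval.

    samples: iterable of (retrieved_image, yes_no_question) pairs, where the
    question probes whether the retrieved image violates the negated clause
    of the query. ask_mllm(image, question) -> "yes" or "no" (hypothetical).
    """
    correct = 0
    for retrieved_image, question in samples:
        # e.g. for the query "a plain blue shirt with no logos", ask the
        # judge: "Does this shirt have a logo?" -- "no" means the negation
        # was respected by the retrieval.
        answer = ask_mllm(retrieved_image, question)
        correct += (answer.strip().lower() == "no")
    return correct / len(samples)
```

A stub judge can stand in for a real MLLM when wiring up the harness.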

📝 Abstract
Joint vision-language embedding models such as CLIP typically fail to understand negation in text queries, for example failing to interpret the "no" in the query "a plain blue shirt with no logos". Prior work has largely addressed this limitation through data-centric approaches, fine-tuning CLIP on large-scale synthetic negation datasets. However, these efforts are commonly evaluated with retrieval-based metrics that cannot reliably reflect whether negation is actually understood. In this paper, we identify two key limitations of such evaluation metrics and investigate an alternative evaluation framework based on multimodal LLMs-as-a-judge, which typically excel at answering simple yes/no questions about image content and therefore provide a fair evaluation of negation understanding in CLIP models. We then ask whether a direction associated with negation already exists in the CLIP embedding space. We find evidence that such a direction exists, and show that it can be manipulated through test-time intervention via representation engineering to steer CLIP toward negation-aware behavior without any fine-tuning. Finally, we test negation understanding on uncommon image-text samples to evaluate generalization under distribution shifts.
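A minimal sketch of the kind of test-time intervention the abstract describes, assuming the negation direction is estimated as a mean difference between embeddings of negated captions and their affirmative counterparts; the paper's exact construction and steering rule may differ.

```python
import numpy as np

def estimate_negation_direction(pos_embs, neg_embs):
    """Estimate a unit 'negation direction' in embedding space as the mean
    difference between negated-caption embeddings and their affirmative
    counterparts (assumed construction for illustration).

    pos_embs, neg_embs: arrays of shape (n_pairs, dim).
    """
    v = neg_embs.mean(axis=0) - pos_embs.mean(axis=0)
    return v / np.linalg.norm(v)

def steer(text_emb, v_neg, alpha=1.0):
    """Test-time intervention: shift a text embedding along the negation
    direction by strength alpha, then re-normalize onto the unit sphere
    (CLIP embeddings are typically L2-normalized before similarity)."""
    out = text_emb + alpha * v_neg
    return out / np.linalg.norm(out)
```

No fine-tuning is involved: the direction is estimated once from paired captions, and `steer` is applied to query embeddings at inference time.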
Problem

Research questions and friction points this paper is trying to address.

negation understanding
vision-language models
CLIP
evaluation metrics
embedding space
Innovation

Methods, ideas, or system contributions that make the work stand out.

negation understanding
vision-language models
representation engineering
multimodal LLMs-as-a-judge
embedding space geometry
Fawaz Sammani
ETRO Department, Vrije Universiteit Brussel, Belgium; imec, Kapeldreef 75, B-3001 Leuven, Belgium
Tzoulio Chamiti
ETRO Department, Vrije Universiteit Brussel, Belgium; imec, Kapeldreef 75, B-3001 Leuven, Belgium
Paul Gavrikov
Independent Researcher
computer vision · safety · generalization · robustness
Nikos Deligiannis
Vrije Universiteit Brussel, imec
Signal Processing · Machine Learning · Computer Vision · Explainable AI