🤖 AI Summary
This work addresses the challenge that vision–language models such as CLIP struggle to interpret negation in textual descriptions (e.g., “no logo”). It reveals, for the first time, the existence of a geometric direction in CLIP’s embedding space that corresponds to negation semantics. Building on this insight, the authors propose a test-time intervention method that adjusts representations along this direction without requiring model fine-tuning, thereby enabling negation-aware inference. To reliably evaluate negation understanding, they introduce a novel benchmark framework that leverages multimodal large language models as judges. Experimental results demonstrate that the proposed approach significantly improves the model’s ability to comprehend negated semantics, particularly on out-of-distribution image–text pairs.
📝 Abstract
Joint vision–language embedding models such as CLIP typically fail to understand negation in text queries, for example, failing to respect the "no" in the query "a plain blue shirt with no logos". Prior work has largely addressed this limitation through data-centric approaches, fine-tuning CLIP on large-scale synthetic negation datasets. However, these efforts are commonly evaluated with retrieval-based metrics that cannot reliably reflect whether negation is actually understood. In this paper, we identify two key limitations of such evaluation metrics and investigate an alternative evaluation framework based on multimodal LLMs as judges, which typically excel at answering simple yes/no questions about image content and therefore provide a fair evaluation of negation understanding in CLIP models. We then ask whether a direction associated with negation already exists in the CLIP embedding space. We find evidence that such a direction exists, and show that it can be manipulated through test-time intervention via representation engineering to steer CLIP toward negation-aware behavior without any fine-tuning. Finally, we test negation understanding on uncommon image–text samples to evaluate generalization under distribution shift.
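The test-time intervention described above can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's exact procedure: it estimates a "negation direction" as the mean difference between paired negated and affirmative caption embeddings, then shifts a query embedding along that direction. Stand-in pseudo-embeddings replace a real CLIP text encoder so the sketch is self-contained; `fake_embed`, `steer`, and `alpha` are hypothetical names.

```python
import zlib
import numpy as np

def fake_embed(texts, dim=512):
    """Stand-in for a CLIP text encoder (assumption: in practice you would
    encode with a real CLIP model). Produces stable, unit-norm pseudo-embeddings
    keyed on a CRC32 of the text."""
    vecs = np.stack([
        np.random.default_rng(zlib.crc32(t.encode())).standard_normal(dim)
        for t in texts
    ])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

# Estimate a negation direction from paired affirmative/negated captions
# (a common representation-engineering recipe; the paper's own estimator
# may differ).
affirmative = ["a shirt with a logo", "a room with windows"]
negated = ["a shirt with no logo", "a room with no windows"]
direction = (fake_embed(negated) - fake_embed(affirmative)).mean(axis=0)
direction /= np.linalg.norm(direction)

def steer(embedding, alpha=0.5):
    """Test-time intervention: shift the embedding along the negation
    direction with strength alpha, then re-normalize to the unit sphere."""
    shifted = embedding + alpha * direction
    return shifted / np.linalg.norm(shifted)

# Steer a query embedding toward negation-aware behavior, no fine-tuning.
query = fake_embed(["a plain blue shirt with no logos"])[0]
steered = steer(query)
print(steered.shape)  # (512,)
```

In a real setting, the steered text embedding would be compared against image embeddings exactly as in standard CLIP retrieval, with `alpha` tuned on a validation set.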