CoPatch: Zero-Shot Referring Image Segmentation by Leveraging Untapped Spatial Knowledge in CLIP

📅 2025-09-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing vision-language models (e.g., CLIP) excel at image-text alignment but exhibit limited capability in modeling spatial relationships: textual representations neglect context words that carry spatial cues, while intermediate visual features lack sensitivity to spatial structure. Both shortcomings hinder zero-shot referring expression segmentation. To address this, we propose CoPatch, a zero-shot framework comprising three key components: (1) context-enhanced hybrid text features that explicitly encode spatial modifiers; (2) spatially faithful image patch features extracted from CLIP's intermediate layers; and (3) a context-aware image-text similarity map (CoMap), combined with clustering for mask generation. CoPatch requires no fine-tuning. Evaluated on RefCOCO, RefCOCO+, RefCOCOg, and PhraseCut, it achieves consistent improvements of 2–7 mIoU over prior zero-shot methods, establishing new state-of-the-art performance in zero-shot referring image segmentation.

📝 Abstract
Spatial grounding is crucial for referring image segmentation (RIS), where the goal is to localize an object described by language. Current foundational vision-language models (VLMs), such as CLIP, excel at aligning images and text but struggle with understanding spatial relationships. Within the language stream, most existing methods focus on the primary noun phrase when extracting local text features, neglecting contextual tokens. Within the vision stream, CLIP generates similar features for images with different spatial layouts, resulting in limited sensitivity to spatial structure. To address these limitations, we propose CoPatch, a zero-shot RIS framework that leverages internal model components to enhance spatial representations in both text and image modalities. For language, CoPatch constructs hybrid text features by incorporating context tokens carrying spatial cues. For vision, it extracts patch-level image features using our novel path discovered from intermediate layers, where spatial structure is better preserved. These enhanced features are fused into a clustered image-text similarity map, CoMap, enabling precise mask selection. As a result, CoPatch significantly improves spatial grounding in zero-shot RIS across RefCOCO, RefCOCO+, RefCOCOg, and PhraseCut (+2–7 mIoU) without requiring any additional training. Our findings underscore the importance of recovering and leveraging the untapped spatial knowledge inherently embedded in VLMs, thereby paving the way for opportunities in zero-shot RIS.
Problem

Research questions and friction points this paper is trying to address.

Enhancing spatial grounding in zero-shot referring image segmentation
Addressing CLIP's limited sensitivity to spatial relationships
Improving spatial representations in both text and image modalities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Constructs hybrid text features with spatial cues
Extracts patch-level image features from intermediate layers
Fuses enhanced features into clustered similarity map
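
The similarity-map-plus-clustering idea behind the steps above can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name `comap_mask`, the use of a simple 1-D 2-means split on similarity values, and all tensor shapes are assumptions for demonstration only.

```python
import numpy as np

def comap_mask(patch_feats, text_feat, iters=10):
    """Hypothetical sketch of a CoMap-style similarity map with clustering.

    patch_feats: (N, D) patch-level image features (e.g., taken from a
    VLM's intermediate layers); text_feat: (D,) hybrid text feature.
    Returns a boolean mask over the N patches.
    """
    # Cosine similarity between every patch and the text feature.
    p = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    t = text_feat / np.linalg.norm(text_feat)
    sim = p @ t  # (N,) image-text similarity map

    # Toy stand-in for the clustering step: 1-D 2-means on the
    # similarity values, splitting foreground from background.
    centers = np.array([sim.min(), sim.max()])
    for _ in range(iters):
        assign = np.abs(sim[:, None] - centers[None, :]).argmin(axis=1)
        for k in range(2):
            if (assign == k).any():
                centers[k] = sim[assign == k].mean()

    # Keep the cluster whose center is more similar to the text.
    return assign == centers.argmax()
```

In practice the patch grid would be reshaped back to 2-D to form the segmentation mask; the sketch only illustrates how a similarity map can be clustered and the text-aligned cluster selected.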