🤖 AI Summary
This work addresses two limitations of open-vocabulary semantic segmentation: insufficient cross-modal interaction and high computational cost. It proposes a coarse-to-fine dynamic interaction framework built upon CLIP that dynamically generates image-specific textual features and explicitly models bidirectional interactions between spatial image features and textual semantics. The method first performs coarse segmentation guided by text prompts, then refines the prediction by integrating fine-grained spatial details from the encoder, while simultaneously leveraging the segmentation output to improve per-mask category prediction. The proposed approach achieves significant gains in both accuracy and efficiency over existing methods across multiple open-vocabulary segmentation benchmarks.
📝 Abstract
Recent years have witnessed remarkable progress in open-vocabulary semantic segmentation (OVSS) using vision-language foundation models, yet existing methods still suffer from the following fundamental challenges: (1) insufficient cross-modal communication between the textual and visual spaces, and (2) significant computational costs from interacting with a massive number of categories. To address these issues, this paper describes a novel coarse-to-fine framework, called DCP-CLIP, for OVSS. Unlike prior efforts that mainly rely on pre-established category content and the inherent spatial-class interaction capability of CLIP, we dynamically construct category-relevant textual features and explicitly model dual interactions between spatial image features and textual class semantics. Specifically, we first leverage CLIP's open-vocabulary recognition capability to identify semantic categories relevant to the image context, upon which we dynamically generate the corresponding textual features to serve as initial textual guidance. Subsequently, we conduct coarse segmentation by cross-modally integrating semantic information from the textual guidance into the visual representations, and then refine the segmentation by integrating spatially enriched features from the encoder to recover fine-grained details and enhance spatial resolution. Finally, we leverage spatial information from the segmentation side to refine the category prediction for each mask, facilitating more precise semantic labeling. Experiments on multiple OVSS benchmarks demonstrate that DCP-CLIP outperforms existing methods, delivering both higher accuracy and greater efficiency.
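The abstract describes a four-step flow: select image-relevant categories with CLIP, generate dynamic textual guidance, segment coarse-to-fine via cross-modal attention, and feed mask-pooled spatial features back into category prediction. Since the paper's code and exact module definitions are not given here, the following is a minimal PyTorch sketch of that flow only; the class `CoarseToFineSketch`, the toy stand-in encoders (`text_bank`, `img_proj`), and all hyperparameters are hypothetical illustrations under stated assumptions, not DCP-CLIP's actual implementation.

```python
# Hypothetical sketch of the coarse-to-fine, dual-interaction pipeline described
# in the abstract. Module and variable names are illustrative, not the authors'.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseToFineSketch(nn.Module):
    def __init__(self, dim=512, vocab_size=150, top_k=8):
        super().__init__()
        self.top_k = top_k
        # Stand-ins for the frozen CLIP encoders: a bank of per-category text
        # embeddings and a patch-embedding conv producing spatial features.
        self.text_bank = nn.Parameter(torch.randn(vocab_size, dim))
        self.img_proj = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        # Dual (bidirectional) cross-modal interaction.
        self.text_to_vis = nn.MultiheadAttention(dim, 8, batch_first=True)
        self.vis_to_text = nn.MultiheadAttention(dim, 8, batch_first=True)
        self.refine = nn.Conv2d(dim * 2, dim, kernel_size=3, padding=1)

    def forward(self, image):
        feat = self.img_proj(image)                       # B x D x H x W
        B, D, H, W = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)          # B x HW x D
        # Step 1: open-vocabulary recognition picks image-relevant categories.
        global_emb = F.normalize(tokens.mean(1), dim=-1)  # crude global pooling
        scores = global_emb @ F.normalize(self.text_bank, dim=-1).t()
        top_idx = scores.topk(self.top_k, dim=-1).indices           # B x K
        text = self.text_bank[top_idx]                               # B x K x D
        # Step 2: coarse segmentation; inject text semantics into visual tokens.
        vis, _ = self.text_to_vis(tokens, text, text)
        coarse = torch.einsum('bnd,bkd->bkn', vis, text).reshape(B, -1, H, W)
        # Step 3: refine with encoder features (in the described method these
        # would be higher-resolution features from earlier encoder stages;
        # same-resolution `feat` is a stand-in here).
        fused = self.refine(torch.cat(
            [vis.transpose(1, 2).reshape(B, D, H, W), feat], dim=1))
        fine = torch.einsum('bnd,bkd->bkn',
                            fused.flatten(2).transpose(1, 2), text)
        fine = fine.reshape(B, -1, H, W)
        # Step 4: mask-pool spatial features to refine per-mask category logits.
        masks = fine.softmax(1).flatten(2)                           # B x K x HW
        pooled = torch.einsum('bkn,bnd->bkd', masks, tokens)
        text_refined, _ = self.vis_to_text(text, pooled, pooled)
        cls_logits = (F.normalize(text_refined, dim=-1) *
                      F.normalize(self.text_bank[top_idx], dim=-1)).sum(-1)
        return coarse, fine, top_idx, cls_logits

if __name__ == "__main__":
    model = CoarseToFineSketch()
    image = torch.randn(2, 3, 224, 224)
    coarse, fine, top_idx, cls_logits = model(image)
    print(fine.shape, cls_logits.shape)  # [2, 8, 14, 14] and [2, 8]
```

The structural point the sketch tries to capture is the dual interaction: textual guidance steers the visual tokens for segmentation (`text_to_vis`), and mask-pooled spatial features flow back to refine the per-mask class prediction (`vis_to_text`), rather than relying on a single one-way fusion.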