DCP-CLIP: A Coarse-to-Fine Framework for Open-Vocabulary Semantic Segmentation with Dual Interaction

📅 2026-03-14
🤖 AI Summary
This work addresses the limitations of insufficient cross-modal interaction and high computational cost in open-vocabulary semantic segmentation by proposing a coarse-to-fine dynamic interaction framework. Built upon CLIP, the method dynamically generates image-specific textual features and explicitly models bidirectional interactions between image spatial features and textual semantics. It first performs coarse segmentation guided by text prompts and then refines the prediction by integrating fine-grained spatial details from the encoder, while simultaneously leveraging the segmentation output to improve category prediction. The proposed approach achieves significant gains in both accuracy and efficiency over existing methods across multiple open-vocabulary segmentation benchmarks.

📝 Abstract
Recent years have witnessed remarkable progress in open-vocabulary semantic segmentation (OVSS) using vision-language foundation models, yet existing approaches still suffer from the following fundamental challenges: (1) insufficient cross-modal communication between the textual and visual spaces, and (2) significant computational cost from interacting with a massive number of categories. To address these issues, this paper describes a novel coarse-to-fine framework, called DCP-CLIP, for OVSS. Unlike prior efforts that mainly rely on pre-established category content and the inherent spatial-class interaction capability of CLIP, we dynamically construct category-relevant textual features and explicitly model dual interactions between spatial image features and textual class semantics. Specifically, we first leverage CLIP's open-vocabulary recognition capability to identify semantic categories relevant to the image context, upon which we dynamically generate corresponding textual features to serve as initial textual guidance. Subsequently, we conduct a coarse segmentation by cross-modally integrating semantic information from the textual guidance into the visual representations, and achieve refined segmentation by integrating spatially enriched features from the encoder to recover fine-grained details and enhance spatial resolution. Finally, we leverage spatial information from the segmentation side to refine the category prediction for each mask, facilitating more precise semantic labeling. Experiments on multiple OVSS benchmarks demonstrate that DCP-CLIP outperforms existing methods, delivering both higher accuracy and greater efficiency.
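The coarse-to-fine pipeline described in the abstract can be sketched in a few steps. The NumPy fragment below is a minimal illustrative sketch, not the authors' implementation: the random features stand in for CLIP text/image embeddings, and the top-k category selection, nearest-neighbor upsampling, and additive fusion are all simplifying assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2norm(x, axis=-1):
    """Normalize vectors to unit length, as CLIP-style embeddings are."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

# Stand-ins for CLIP outputs (all shapes and values are illustrative).
V = 8          # full open-vocabulary size
D = 16         # embedding dimension
H = W = 4      # coarse spatial resolution

text_feats  = l2norm(rng.normal(size=(V, D)))          # one embedding per category name
image_feat  = l2norm(rng.normal(size=(D,)))            # global image embedding
patch_feats = l2norm(rng.normal(size=(H, W, D)))       # spatial image features
fine_feats  = l2norm(rng.normal(size=(2*H, 2*W, D)))   # higher-res encoder features

# Step 1: identify image-relevant categories -> dynamic textual guidance.
# Restricting to top-k categories is what keeps the interaction cheap.
scores = text_feats @ image_feat                       # (V,) image-text similarity
keep = np.argsort(scores)[-3:]                         # indices of top-3 categories
guidance = text_feats[keep]                            # (K, D) dynamic textual features

# Step 2: coarse segmentation via text-to-image interaction.
coarse_logits = np.einsum('hwd,kd->hwk', patch_feats, guidance)
coarse_seg = coarse_logits.argmax(-1)                  # (H, W) coarse mask

# Step 3: refine with fine-grained encoder features (upsample + fuse).
up = np.repeat(np.repeat(coarse_logits, 2, axis=0), 2, axis=1)   # (2H, 2W, K)
fine_logits = up + np.einsum('hwd,kd->hwk', fine_feats, guidance)
refined_seg = fine_logits.argmax(-1)                   # (2H, 2W) refined mask

# Step 4: image-to-text interaction -- features pooled inside each
# predicted mask re-score the candidate categories, refining its label.
mask_labels = {}
for k in range(len(keep)):
    m = refined_seg == k
    if m.any():
        pooled = l2norm(fine_feats[m].mean(axis=0))    # mask-averaged feature
        mask_labels[k] = int((guidance @ pooled).argmax())
```

In this toy version the "dual interaction" shows up as the two directions of the dot products: text guidance scoring image locations (steps 2-3) and mask-pooled image features re-scoring the text categories (step 4).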
Problem

Research questions and friction points this paper is trying to address.

open-vocabulary semantic segmentation
cross-modal interaction
computational cost
visual-language models
semantic segmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

open-vocabulary semantic segmentation
coarse-to-fine framework
dual interaction
dynamic textual feature generation
cross-modal integration
Jing Wang
National Engineering Research Center of Communications and Networking, Nanjing University of Posts & Telecommunications, Nanjing 210003, P. R. China
Huimin Shi
National Engineering Research Center of Communications and Networking, Nanjing University of Posts & Telecommunications, Nanjing 210003, P. R. China
Quan Zhou
National Engineering Research Center of Communications and Networking, Nanjing University of Posts & Telecommunications, Nanjing 210003, P. R. China; and Institute for Advanced Ocean Research (Nantong), Southeast University, Nantong 226334, P. R. China
Qibo Liu
National Engineering Research Center of Communications and Networking, Nanjing University of Posts & Telecommunications, Nanjing 210003, P. R. China
Suofei Zhang
Department of Internet of Things, Nanjing University of Posts & Telecommunications, Nanjing 210003, P. R. China
Huimin Lu
National University of Defense Technology
Robot Vision, Multi-robot Coordination, Robot Soccer, Robot Rescue