Bridge the Gap Between Visual and Linguistic Comprehension for Generalized Zero-shot Semantic Segmentation

📅 2025-03-31
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the pixel-level generalization challenge for unseen classes in generalized zero-shot semantic segmentation (GZS3). The authors propose a fine-grained vision-language decoupling framework, DeVLMatch, that decomposes objects into two interpretable dimensions, "parts" and "states," departing from conventional single-vector semantic embedding. Specifically, they introduce spatial-part matching (SPMatch) to achieve part-level spatial alignment and channel-state matching (CSMatch) to model state-channel semantic correlations. By integrating graph matching, spatial attention, and channel-wise alignment mechanisms, the approach enables fine-grained cross-modal alignment and effective zero-shot knowledge transfer. Extensive experiments on the PASCAL VOC, COCO-Stuff, and CATARACTS benchmarks demonstrate significant improvements in unseen-class segmentation accuracy and generalization robustness, establishing an interpretable and scalable paradigm for GZS3.

📝 Abstract
Generalized zero-shot semantic segmentation (GZS3) aims to achieve the human-level capability of segmenting not only seen classes but also novel class regions unseen in the training data, by introducing semantic representations, e.g., word vectors, as a bridge. While effective, using a single semantic representation to associate each class and to enable knowledge transfer from seen to unseen classes is both insufficient and incompatible with human cognition. Inspired by the observation that humans often use 'part' and 'state' information to comprehend seen objects and imagine unseen classes, we decouple each class into detailed descriptions, including object parts and states. Based on this decoupling formulation, we propose a Decoupled Vision-Language Matching (DeVLMatch) framework, composed of spatial-part (SPMatch) and channel-state (CSMatch) matching modules, for GZS3. In SPMatch, we comprehend objects with spatial part information from both visual and linguistic perspectives and perform graph matching to bridge the gap. In CSMatch, states of objects from the linguistic perspective are matched to compatible channel information from the visual perspective. By decoupling and matching objects across visual and linguistic comprehension, we can explicitly introspect the relationship between seen and unseen classes at the fine-grained object part and state levels, thereby facilitating knowledge transfer from seen to unseen classes in visual space. The proposed DeVLMatch framework surpasses previous GZS3 methods on standard benchmarks, including PASCAL VOC, COCO-Stuff, and CATARACTS, demonstrating its effectiveness.
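The SPMatch idea of bridging visual and linguistic part graphs via graph matching can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes visual part features and linguistic part embeddings are already extracted into a shared dimension, and uses Sinkhorn normalization as one standard way to obtain a soft part-to-part assignment from a cosine-similarity matrix:

```python
import numpy as np

def sinkhorn(scores, n_iters=50):
    """Turn a similarity matrix into an (approximately) doubly-stochastic
    soft assignment by alternating row/column normalization."""
    P = np.exp(scores)
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)  # rows sum to 1
        P /= P.sum(axis=0, keepdims=True)  # columns sum to 1
    return P

def soft_part_matching(visual_parts, text_parts):
    """visual_parts: (n, d) visual part features (hypothetical inputs);
    text_parts: (m, d) linguistic part embeddings.
    Returns an (n, m) soft matching between the two part sets."""
    v = visual_parts / np.linalg.norm(visual_parts, axis=1, keepdims=True)
    t = text_parts / np.linalg.norm(text_parts, axis=1, keepdims=True)
    return sinkhorn(v @ t.T)

rng = np.random.default_rng(0)
P = soft_part_matching(rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
print(P.shape)  # (4, 4)
```

A soft assignment like `P` lets part-level knowledge about seen classes transfer to unseen classes that share parts, without requiring a hard one-to-one correspondence.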
Problem

Research questions and friction points this paper is trying to address.

Bridges visual and linguistic comprehension for semantic segmentation
Decouples classes into parts and states for better knowledge transfer
Improves zero-shot segmentation of seen and unseen object classes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decouples classes into parts and states
Uses spatial-part and channel-state matching
Bridges visual and linguistic comprehension gaps
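The channel-state matching idea above can be sketched as a simple channel reweighting: each feature channel gets a descriptor in the text embedding space and is emphasized according to its best-matching state description. The projection matrix and all shapes here are hypothetical stand-ins, not the paper's learned components:

```python
import numpy as np

def channel_state_reweight(features, state_embs, proj):
    """features: (C, H, W) visual feature map;
    state_embs: (S, d) linguistic state embeddings (e.g., 'furry', 'metallic');
    proj: (C, d) hypothetical learned descriptors mapping channels to text space.
    Returns features scaled per channel by similarity to the best-matching state."""
    c = proj / np.linalg.norm(proj, axis=1, keepdims=True)
    s = state_embs / np.linalg.norm(state_embs, axis=1, keepdims=True)
    sim = c @ s.T                                # (C, S) channel-state similarity
    w = 1.0 / (1.0 + np.exp(-sim.max(axis=1)))   # sigmoid gate per channel
    return features * w[:, None, None]

rng = np.random.default_rng(1)
out = channel_state_reweight(rng.normal(size=(16, 8, 8)),
                             rng.normal(size=(5, 32)),
                             rng.normal(size=(16, 32)))
print(out.shape)  # (16, 8, 8)
```

Gating channels by state similarity is one way to make channel information linguistically grounded, so that state descriptions shared between seen and unseen classes can guide segmentation of the latter.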
🔎 Similar Papers
2024-06-23 · International Conference on Learning Representations · Citations: 3