Distilling Spectral Graph for Object-Context Aware Open-Vocabulary Semantic Segmentation

📅 2024-11-26
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Open-vocabulary semantic segmentation (OVSS) suffers from insufficient object-level contextual modeling, hindering semantic-consistent grouping and precise alignment with arbitrary text queries. To address this without additional training, we propose a spectral-graph-guided framework. Our method first distills spectral graph features—capturing intra-object structural consistency—from vision foundation models into the visual encoder, enhancing its capacity for object-aware representation learning. Second, it introduces a joint calibration mechanism between text embeddings and image-level object existence probabilities, enabling zero-shot query-object semantic alignment. The framework is plug-and-play, requires no fine-tuning, and exhibits strong generalization across unseen categories. Evaluated on multiple benchmarks, it achieves state-of-the-art performance, significantly improving both mask accuracy and semantic consistency for novel classes.
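The "spectral graph features" mentioned above can be pictured as the low-frequency eigenvectors of a patch-affinity graph built from foundation-model features. The following is a minimal sketch of that idea, not the paper's actual pipeline: the feature matrix is a random stand-in for ViT patch tokens (e.g. from DINO), and all array sizes and the thresholding choice are assumptions for illustration.

```python
import numpy as np

# Hypothetical stand-in for patch features from a vision foundation model
rng = np.random.default_rng(0)
num_patches, dim = 64, 32          # e.g. an 8x8 grid of ViT patch tokens
feats = rng.normal(size=(num_patches, dim))

# Cosine-similarity affinity, clipped at zero to keep edge weights non-negative
f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
W = np.clip(f @ f.T, 0.0, None)
np.fill_diagonal(W, 0.0)

# Symmetric normalized graph Laplacian: L = I - D^{-1/2} W D^{-1/2}
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-8))
L = np.eye(num_patches) - D_inv_sqrt @ W @ D_inv_sqrt

# Low-frequency eigenvectors encode coarse grouping structure; the Fiedler
# vector (second-smallest eigenvalue) gives a soft object/background split
eigvals, eigvecs = np.linalg.eigh(L)   # eigenvalues in ascending order
fiedler = eigvecs[:, 1]                # one soft grouping score per patch
```

In the paper's framework these spectral cues are distilled into the visual encoder's attention rather than used directly, so that semantically coherent patches attend to one another and form a single object mask.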

📝 Abstract
Open-Vocabulary Semantic Segmentation (OVSS) has advanced with recent vision-language models (VLMs), enabling segmentation beyond predefined categories through various learning schemes. Notably, training-free methods offer scalable, easily deployable solutions for handling unseen data, a key goal of OVSS. Yet, a critical issue persists: the lack of object-level context consideration when segmenting complex objects in the challenging environment of OVSS based on arbitrary query prompts. This oversight limits models' ability to group semantically consistent elements within an object and map them precisely to user-defined arbitrary classes. In this work, we introduce a novel approach that overcomes this limitation by incorporating object-level contextual knowledge within images. Specifically, our model enhances intra-object consistency by distilling spectral-driven features from vision foundation models into the attention mechanism of the visual encoder, enabling semantically coherent components to form a single object mask. Additionally, we refine the text embeddings with zero-shot object presence likelihood to ensure accurate alignment with the specific objects represented in the images. By leveraging object-level contextual knowledge, our proposed approach achieves state-of-the-art performance with strong generalizability across diverse datasets.
Problem

Research questions and friction points this paper is trying to address.

Lack of object-level context in OVSS segmentation.
Difficulty in grouping semantically consistent elements.
Challenges in mapping elements to user-defined classes.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distills spectral-driven features for object consistency.
Refines text embeddings with zero-shot object likelihood.
Incorporates object-level context in segmentation models.
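The text-embedding refinement above can be illustrated as follows: an image-level "object presence" probability, obtained zero-shot from CLIP-style image-text similarity, down-weights classes unlikely to appear before the per-patch class assignment. This is a hedged sketch, not the paper's implementation; the embeddings are random stand-ins, and the temperature value and the multiplicative calibration are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
num_patches, num_classes, dim = 64, 5, 32

def normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Random stand-ins for CLIP-style patch, image, and class-text embeddings
patch_emb = normalize(rng.normal(size=(num_patches, dim)))
text_emb = normalize(rng.normal(size=(num_classes, dim)))
image_emb = normalize(patch_emb.mean(axis=0))   # global image embedding

# Zero-shot presence likelihood: softmax over image-text similarities
logits = image_emb @ text_emb.T / 0.07          # CLIP-like temperature (assumed)
presence = np.exp(logits) / np.exp(logits).sum()

# Patch-class similarity, calibrated by presence before the argmax mask,
# so classes absent from the image are suppressed across all patches
sim = patch_emb @ text_emb.T                    # (num_patches, num_classes)
calibrated = sim * presence
pred = calibrated.argmax(axis=1)                # per-patch class index
```

The key design point is that the calibration is image-level and zero-shot: no fine-tuning is needed, which is what keeps the framework plug-and-play across unseen categories.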