🤖 AI Summary
CLIP’s pretraining prioritizes global semantic alignment, leading to noisy and inconsistent fine-grained local region predictions in open-vocabulary semantic segmentation. To address this, we propose a training-free, structure-aware feature correction method. First, we construct a Region Adjacency Graph (RAG) grounded in low-level features—such as color and texture—to explicitly encode image-structure priors. Then, leveraging this graph topology, we locally refine and correct CLIP’s high-level semantic features, mitigating the dispersion bias induced by contrastive learning. Our approach significantly suppresses segmentation noise and improves regional consistency, achieving state-of-the-art performance across multiple open-vocabulary segmentation benchmarks. The core contribution lies in a parameter-free graph-based modeling framework that bridges low-level structural priors with high-level semantic representations, thereby enhancing CLIP’s local discriminability without architectural or training modifications.
📝 Abstract
Benefiting from the inductive biases learned from large-scale datasets, open-vocabulary semantic segmentation (OVSS) leverages the power of vision-language models, such as CLIP, to achieve remarkable progress without requiring task-specific training. However, due to CLIP's pre-training nature on image-text pairs, it tends to focus on global semantic alignment, resulting in suboptimal performance when associating fine-grained visual regions with text. This leads to noisy and inconsistent predictions, particularly in local areas. We attribute this to a dispersed bias stemming from its contrastive training paradigm, which is difficult to alleviate using CLIP features alone. To address this, we propose a structure-aware feature rectification approach that incorporates instance-specific priors derived directly from the image. Specifically, we construct a region adjacency graph (RAG) based on low-level features (e.g., colour and texture) to capture local structural relationships and use it to refine CLIP features by enhancing local discrimination. Extensive experiments show that our method effectively suppresses segmentation noise, improves region-level consistency, and achieves strong performance on multiple open-vocabulary segmentation benchmarks.