🤖 AI Summary
Existing training-free open-vocabulary semantic segmentation methods suffer from three key limitations: (1) high computational cost in affinity refinement, (2) ineffective fusion of attention maps and reliance on fixed Gaussian kernels for local smoothness, and (3) isotropic neighborhood constraints that hinder structural adaptability. This paper proposes a novel training-free framework addressing these issues. Its core innovations are: (1) leveraging self-attention from Stable Diffusion models to construct data-driven, anisotropic neighborhood graphs; (2) introducing an entropy-weighted mechanism to dynamically select salient attention maps; and (3) replacing Gaussian kernels with non-parametric random walks for adaptive spatial diffusion. The method eliminates the need for post-processing (e.g., CRF or PAMR) and supports segmentation of arbitrarily shaped objects. It achieves state-of-the-art zero-shot segmentation performance across seven major benchmarks, significantly outperforming existing training-free approaches and demonstrating both effectiveness and strong generalization capability.
📝 Abstract
Despite recent advances in Open-Vocabulary Semantic Segmentation (OVSS), existing training-free methods face several limitations: computationally expensive affinity refinement strategies; ineffective fusion of transformer attention maps through equal weighting; and reliance on fixed-size Gaussian kernels to reinforce local spatial smoothness, which enforces isotropic neighborhoods. We propose a strong baseline for training-free OVSS, termed NERVE (Neighbourhood & Entropy-guided Random-walk for open-Vocabulary sEgmentation), which uniquely integrates global and fine-grained local information by exploiting the neighbourhood structure from the self-attention layers of a Stable Diffusion model. We also introduce a stochastic random walk that refines the affinity rather than relying on fixed-size Gaussian kernels for local context. This spatial diffusion process encourages propagation across connected and semantically related regions, enabling it to effectively delineate objects of arbitrary shape. Whereas most existing approaches treat self-attention maps from different transformer heads or layers equally, our method uses entropy-based uncertainty to select the most relevant maps. Notably, our method does not require any conventional post-processing techniques such as Conditional Random Fields (CRF) or Pixel-Adaptive Mask Refinement (PAMR). Experiments on 7 popular semantic segmentation benchmarks yield overall state-of-the-art zero-shot segmentation performance, providing an effective approach to open-vocabulary semantic segmentation.
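The two core ingredients described above, entropy-guided selection of self-attention maps and random-walk affinity refinement, can be sketched roughly as follows. This is a minimal NumPy illustration under assumed tensor shapes; the function names, the inverse-entropy weighting form, and the fixed step count are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def entropy_weights(attn_maps, eps=1e-12):
    """Weight each attention map by its (negated, exponentiated) mean row
    entropy: peaked, low-entropy maps are treated as more salient.
    attn_maps: (H, N, N) — H heads/layers, N spatial tokens."""
    p = attn_maps / (attn_maps.sum(-1, keepdims=True) + eps)   # row-normalize
    ent = -(p * np.log(p + eps)).sum(-1).mean(-1)              # (H,) mean entropy
    w = np.exp(-ent)                                           # low entropy -> high weight
    return w / w.sum()

def random_walk_refine(affinity, seg_logits, steps=3):
    """Refine per-class logits by diffusing them along a row-stochastic
    transition matrix built from the fused self-attention affinity,
    instead of smoothing with a fixed-size Gaussian kernel.
    affinity: (N, N) non-negative; seg_logits: (N, C)."""
    T = affinity / affinity.sum(-1, keepdims=True)             # transition matrix
    x = seg_logits
    for _ in range(steps):                                     # spatial diffusion
        x = T @ x
    return x

# Fuse heads by entropy weight, then refine coarse logits.
rng = np.random.default_rng(0)
attn = rng.random((4, 16, 16))                                 # toy attention maps
w = entropy_weights(attn)
fused = np.tensordot(w, attn, axes=1)                          # (N, N) affinity
refined = random_walk_refine(fused, rng.random((16, 3)))
```

Because the transition matrix comes from the image's own self-attention, the diffusion follows the data-driven, anisotropic neighbourhood structure rather than an isotropic kernel, which is what lets mass propagate along arbitrarily shaped, connected regions.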