OVS-DINO: Open-Vocabulary Segmentation via Structure-Aligned SAM-DINO with Language Guidance

📅 2026-04-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses a central tension in open-vocabulary segmentation: strong semantic generalization is often accompanied by insufficient boundary precision. The authors observe that boundary-aware information in DINO features diminishes with network depth and propose a structure alignment mechanism that leverages SAM’s structural priors to reactivate suppressed boundary-sensitive features within DINO. Their approach integrates a Structure-Aware Encoder (SAE), a Structure-Modulated Decoder (SMD), pseudo-masks generated by SAM for supervision, and a language-guided framework for open-vocabulary segmentation. Evaluated on multiple weakly-supervised open-vocabulary segmentation benchmarks, the method achieves state-of-the-art performance, with an average improvement of 2.1% and a notable gain of 6.3% on the complex Cityscapes dataset.
📝 Abstract
Open-Vocabulary Segmentation (OVS) aims to segment image regions beyond predefined category sets by leveraging semantic descriptions. While CLIP-based approaches excel in semantic generalization, they frequently lack the fine-grained spatial awareness required for dense prediction. Recent efforts have incorporated Vision Foundation Models (VFMs) like DINO to alleviate these limitations. However, these methods still struggle with the precise edge perception necessary for high-fidelity segmentation. In this paper, we analyze the internal representations of DINO and discover that its inherent boundary awareness is not absent but rather undergoes progressive attenuation as features transition into deeper transformer blocks. To address this, we propose OVS-DINO, a novel framework that revitalizes the latent edge-sensitivity of DINO through structural alignment with the Segment Anything Model (SAM). Specifically, we introduce a Structure-Aware Encoder (SAE) and a Structure-Modulated Decoder (SMD) to activate the boundary features of DINO using SAM's structural priors, complemented by a supervision strategy utilizing SAM-generated pseudo-masks. Extensive experiments demonstrate that our method achieves state-of-the-art performance across multiple weakly-supervised OVS benchmarks, improving the average score by 2.1% (from 44.8% to 46.9%). Notably, our approach significantly enhances segmentation accuracy in complex, cluttered scenarios, with a gain of 6.3% on Cityscapes (from 36.6% to 42.9%).
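The core idea of the abstract, using SAM's structural priors to re-activate suppressed boundary-sensitive channels in DINO features, can be pictured as a gated feature-modulation step. The sketch below is a minimal NumPy illustration of that general mechanism, not the paper's actual SAE/SMD design: the sigmoid gating form, the projection matrix `w_proj`, and the token shapes are all assumptions introduced for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def structure_modulate(dino_feats, sam_feats, w_proj, residual=True):
    """Re-weight DINO token features with a gate derived from SAM features.

    dino_feats: (N, C) patch tokens from DINO
    sam_feats:  (N, D) spatially aligned SAM features (structural prior)
    w_proj:     (D, C) projection mapping SAM features to per-channel gates
                (random here; it would be learned in a real model)
    """
    gate = sigmoid(sam_feats @ w_proj)   # (N, C), per-channel gate in (0, 1)
    modulated = dino_feats * gate        # amplify boundary-sensitive channels
    # A residual path keeps the original semantic content intact.
    return modulated + dino_feats if residual else modulated

# Toy example with random features.
rng = np.random.default_rng(0)
N, C, D = 16, 8, 4                       # tokens, DINO channels, SAM channels
f = rng.normal(size=(N, C))
s = rng.normal(size=(N, D))
w = rng.normal(size=(D, C))
out = structure_modulate(f, s, w)
print(out.shape)
```

With the residual path, each channel is scaled by a factor in (1, 2), so the structural prior can only strengthen, never erase, the original DINO response; this mirrors the summary's framing of "reactivating" rather than replacing boundary information.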
Problem

Research questions and friction points this paper is trying to address.

Open-Vocabulary Segmentation
Boundary Perception
Vision Foundation Models
Semantic Generalization
Dense Prediction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Open-Vocabulary Segmentation
Structure Alignment
Vision Foundation Models
Boundary Awareness
SAM-DINO Integration