AI Summary
This work proposes a DINOv3-based framework tailored for open-vocabulary semantic segmentation to address the limitations of existing models in generalizing to unseen categories, achieving high spatial precision, and maintaining robustness in complex scenes. The approach employs a task-specific architecture that jointly optimizes global [CLS] tokens and local patch-level text-vision alignment, while integrating early visual representations with late-stage correlated features through a dual refinement mechanism. Furthermore, a sliding window strategy is introduced to enable high-resolution local-to-global reasoning. Evaluated on five mainstream open-vocabulary semantic segmentation benchmarks, the method significantly outperforms current state-of-the-art approaches, demonstrating superior accuracy and robustness in challenging scenarios.
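The patch-level text-vision alignment described above can be illustrated with a minimal sketch: cosine similarity between ViT patch tokens and per-class text embeddings yields a coarse per-patch class map. The function name, feature dimensions, and random inputs below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def patch_text_similarity(patch_feats, text_embeds):
    """Cosine similarity between patch features and class text embeddings.

    patch_feats: (H*W, D) patch tokens from a ViT visual encoder.
    text_embeds: (C, D) one embedding per open-vocabulary class name.
    Returns an (H*W, C) similarity map; argmax over C gives a coarse
    per-patch label, which dense-prediction heads then refine.
    """
    # L2-normalize both sides so the dot product is a cosine similarity
    p = patch_feats / np.linalg.norm(patch_feats, axis=-1, keepdims=True)
    t = text_embeds / np.linalg.norm(text_embeds, axis=-1, keepdims=True)
    return p @ t.T

# Toy example: 4 patches, 8-dim features, 3 candidate class names
rng = np.random.default_rng(0)
sim = patch_text_similarity(rng.normal(size=(4, 8)), rng.normal(size=(3, 8)))
labels = sim.argmax(axis=-1)  # per-patch class index, shape (4,)
```

In practice such raw similarity maps are noisy, which is precisely what motivates the refinement stages the abstract describes.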
Abstract
Open-Vocabulary Semantic Segmentation (OVSS) assigns pixel-level labels from an open set of text-defined categories, demanding reliable generalization to unseen classes at inference. Although modern vision-language models (VLMs) support strong open-vocabulary recognition, their representations learned through global contrastive objectives remain suboptimal for dense prediction, prompting many OVSS methods to depend on limited adaptation or refinement of image-text similarity maps. This, in turn, restricts spatial precision and robustness in complex, cluttered scenes. We introduce dinov3.seg, extending dinov3.txt into a dedicated framework for OVSS. Our contributions are four-fold. First, we design a task-specific architecture tailored to this backbone, systematically adapting established design principles from prior open-vocabulary segmentation work. Second, we jointly leverage text embeddings aligned with both the global [CLS] token and local patch-level visual features from the ViT-based encoder, effectively combining semantic discrimination with fine-grained spatial locality. Third, unlike prior approaches that rely primarily on post hoc similarity refinement, we perform early refinement of visual representations prior to image-text interaction, followed by late refinement of the resulting image-text correlation features, enabling more accurate and robust dense predictions in cluttered scenes. Finally, we propose a high-resolution local-global inference strategy based on sliding-window aggregation, which preserves spatial detail while maintaining global context. We conduct extensive experiments on five widely adopted OVSS benchmarks to evaluate our approach. The results demonstrate its effectiveness and robustness, consistently outperforming current state-of-the-art methods.