🤖 AI Summary
Open-vocabulary semantic segmentation often suffers from hallucinations, missed detections, and misaligned visual-textual representations in complex scenes, owing to the absence of object priors and spatial constraints. To address these challenges, this work proposes LoGoSeg, a single-stage, end-to-end framework that incorporates object existence priors to suppress hallucinations, introduces a region-aware alignment module to improve localization accuracy, and employs a dual-stream fusion mechanism to integrate local structural details with global semantic context. Built on vision-language models such as CLIP, LoGoSeg requires no external masks or auxiliary models and achieves state-of-the-art performance across six benchmark datasets (A-847, PC-459, A-150, PC-59, PAS-20, and PAS-20b), demonstrating strong generalization.
📝 Abstract
Open-vocabulary semantic segmentation (OVSS) extends traditional closed-set segmentation by enabling pixel-wise annotation for both seen and unseen categories using arbitrary textual descriptions. While existing methods leverage vision-language models (VLMs) such as CLIP, their reliance on image-level pretraining often results in imprecise spatial alignment and mismatched segmentations in ambiguous or cluttered scenes. Moreover, most existing approaches lack strong object priors and region-level constraints, which can lead to object hallucination or missed detections, further degrading performance. To address these challenges, we propose LoGoSeg, an efficient single-stage framework that integrates three key innovations: (i) an object existence prior that dynamically weights relevant categories through global image-text similarity, effectively reducing hallucinations; (ii) a region-aware alignment module that establishes precise region-level visual-textual correspondences; and (iii) a dual-stream fusion mechanism that combines local structural information with global semantic context. Unlike prior works, LoGoSeg eliminates the need for external mask proposals, additional backbones, or extra datasets, ensuring efficiency. Extensive experiments on six benchmarks (A-847, PC-459, A-150, PC-59, PAS-20, and PAS-20b) demonstrate its competitive performance and strong generalization in open-vocabulary settings.
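The abstract's first innovation, an object existence prior derived from global image-text similarity, can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the paper's actual implementation): it assumes precomputed CLIP-style embeddings, scores each candidate category against the global image embedding with cosine similarity, and turns those scores into per-category weights that gate the segmentation logits, so categories unlikely to be present in the image are suppressed.

```python
import numpy as np

def object_existence_prior(image_emb, text_embs, temperature=0.07):
    """Weight candidate categories by global image-text similarity.

    image_emb: (d,) global image embedding (e.g., from a CLIP image encoder).
    text_embs: (num_classes, d) text embeddings of the candidate categories.
    Returns a (num_classes,) weight vector summing to 1. All names and the
    softmax gating here are illustrative assumptions, not the paper's design.
    """
    # L2-normalize so the dot product is cosine similarity.
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = text_embs @ image_emb                        # (num_classes,)
    # Temperature-scaled softmax: low-similarity (likely absent) classes
    # receive small weights, which suppresses hallucinated segments.
    scores = np.exp(sims / temperature)
    return scores / scores.sum()

# Toy usage with random embeddings standing in for real CLIP features.
rng = np.random.default_rng(0)
image_emb = rng.normal(size=512)
text_embs = rng.normal(size=(20, 512))                  # 20 candidate categories
prior = object_existence_prior(image_emb, text_embs)    # (20,)

# Gate dense per-pixel class logits with the per-category prior.
pixel_logits = rng.normal(size=(20, 8, 8))              # (classes, H, W)
gated_logits = prior[:, None, None] * pixel_logits
```

In this sketch the gating is a simple multiplicative reweighting; the paper's actual fusion of the prior with region-level alignment may differ.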