🤖 AI Summary
This work addresses open-vocabulary image segmentation—i.e., zero-shot pixel-level mask generation conditioned on arbitrary text prompts (class names or natural language descriptions). Methodologically, we propose a lightweight adapter framework that freezes pre-trained SAM2 and a vision-language model (VLM), and introduces three key components: positional tie-breaker embeddings, cross-modal attention, and multimodal embedding alignment—enabling a unified interface for both class-level and sentence-level prompts. We further design an instance-aware enhancement module and an efficient fine-tuning strategy that optimizes only 4.5 million parameters. Our approach achieves state-of-the-art performance on open-vocabulary semantic, instance, and panoptic segmentation across the ADE20K, PASCAL, ScanNet, and SUN-RGBD benchmarks. It demonstrates strong generalization to unseen categories and high computational efficiency, bridging the gap between expressiveness and practicality in open-vocabulary segmentation.
📝 Abstract
The ability to segment objects based on open-ended language prompts remains a critical challenge, requiring models to ground textual semantics into precise spatial masks while handling diverse and unseen categories. We present OpenWorldSAM, a framework that extends the prompt-driven Segment Anything Model v2 (SAM2) to open-vocabulary scenarios by integrating multi-modal embeddings extracted from a lightweight vision-language model (VLM). Our approach is guided by four key principles: i) Unified prompting: OpenWorldSAM supports a diverse range of prompts, including category-level and sentence-level language descriptions, providing a flexible interface for various segmentation tasks. ii) Efficiency: By freezing the pre-trained components of SAM2 and the VLM, we train only 4.5 million parameters on the COCO-Stuff dataset, achieving remarkable resource efficiency. iii) Instance awareness: We enhance the model's spatial understanding through novel positional tie-breaker embeddings and cross-attention layers, enabling effective segmentation of multiple instances. iv) Generalization: OpenWorldSAM exhibits strong zero-shot capabilities, generalizing well to unseen categories and an open vocabulary of concepts without additional training. Extensive experiments demonstrate that OpenWorldSAM achieves state-of-the-art performance in open-vocabulary semantic, instance, and panoptic segmentation across multiple benchmarks, including ADE20K, PASCAL, ScanNet, and SUN-RGBD.
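To make the adapter idea concrete, the sketch below shows one plausible reading of principles ii) and iii): the large SAM2 and VLM backbones stay frozen, and a small trainable module projects VLM text embeddings into SAM2's prompt space, adds learned positional tie-breaker embeddings so identical category embeddings can address different instances, and cross-attends to image features. All module names, dimensions, and the stand-in frozen backbone are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class OpenVocabAdapter(nn.Module):
    """Hypothetical lightweight adapter; dims and structure are illustrative."""
    def __init__(self, vlm_dim=512, sam_dim=256, num_queries=8):
        super().__init__()
        # Positional tie-breaker embeddings: distinct learned offsets so the
        # same text embedding can be routed to multiple object instances.
        self.tie_breaker = nn.Embedding(num_queries, sam_dim)
        # Multimodal alignment: project VLM embeddings into SAM2's prompt space.
        self.proj = nn.Linear(vlm_dim, sam_dim)
        # Cross-modal attention: language queries attend to image features.
        self.cross_attn = nn.MultiheadAttention(sam_dim, num_heads=8,
                                                batch_first=True)

    def forward(self, text_emb, image_feats):
        # text_emb: (B, vlm_dim); image_feats: (B, HW, sam_dim)
        q = self.proj(text_emb).unsqueeze(1) + self.tie_breaker.weight.unsqueeze(0)
        out, _ = self.cross_attn(q, image_feats, image_feats)
        return out  # (B, num_queries, sam_dim): prompt tokens for SAM2's decoder

# Freeze the large pre-trained components; only the adapter is optimized.
frozen_backbone = nn.Linear(3, 512)  # stand-in for the frozen SAM2/VLM weights
for p in frozen_backbone.parameters():
    p.requires_grad = False

adapter = OpenVocabAdapter()
trainable = sum(p.numel() for p in adapter.parameters() if p.requires_grad)
```

In this reading, the optimizer sees only `adapter.parameters()`, which is how the trainable footprint stays small relative to the frozen backbones.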