OpenWorldSAM: Extending SAM2 for Universal Image Segmentation with Language Prompts

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses open-vocabulary image segmentation—i.e., zero-shot pixel-level mask generation conditioned on arbitrary text prompts (class names or natural language descriptions). Methodologically, we propose a lightweight adapter framework that freezes pre-trained SAM2 and a vision-language model (VLM), and introduces three key components: position-disambiguated embeddings, cross-modal attention, and multimodal embedding alignment—enabling a unified interface for both class-level and sentence-level prompts. We further design an instance-aware enhancement module and an efficient fine-tuning strategy, optimizing only 4.5 million parameters. Our approach achieves state-of-the-art performance on open-vocabulary semantic, instance, and panoptic segmentation across ADE20K, PASCAL, and ScanNet benchmarks. It demonstrates strong generalization to unseen categories and high computational efficiency, bridging the gap between expressiveness and practicality in open-vocabulary segmentation.
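To make the adapter idea concrete, here is a minimal pure-Python sketch of the core mechanism the summary describes: a frozen language embedding is replicated into several instance queries, each offset by a positional tie-breaker embedding so that identical text prompts can resolve to distinct instances, and the queries then cross-attend over frozen image features to produce per-instance prompt tokens. All vectors, dimensions, and values below are made up for illustration; this is a single-head toy, not the actual OpenWorldSAM implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(queries, keys, values):
    """Single-head cross-attention: each query attends over all keys."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

# One frozen text embedding (e.g., for "chair"; values are made up),
# replicated into N=3 instance queries. The learned tie-breaker offsets
# make otherwise-identical queries distinguishable.
text_emb = [0.2, -0.1, 0.4, 0.3]
tie_breakers = [[0.05 * i, 0.0, -0.02 * i, 0.01 * i] for i in range(3)]
queries = [[t + p for t, p in zip(text_emb, tb)] for tb in tie_breakers]

# Stand-in for frozen SAM2 image features (made-up values).
image_feats = [[0.1, 0.2, 0.0, -0.1],
               [0.3, -0.2, 0.5, 0.1],
               [-0.1, 0.4, 0.2, 0.0]]

prompt_tokens = cross_attention(queries, image_feats, image_feats)
print(len(prompt_tokens), len(prompt_tokens[0]))  # prints: 3 4
```

In the adapter setup the summary outlines, only components like the tie-breaker embeddings and the cross-attention weights would be trained, while the text encoder producing `text_emb` and the image encoder producing `image_feats` stay frozen, which is what keeps the trainable parameter count small.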

📝 Abstract
The ability to segment objects based on open-ended language prompts remains a critical challenge, requiring models to ground textual semantics into precise spatial masks while handling diverse and unseen categories. We present OpenWorldSAM, a framework that extends the prompt-driven Segment Anything Model v2 (SAM2) to open-vocabulary scenarios by integrating multi-modal embeddings extracted from a lightweight vision-language model (VLM). Our approach is guided by four key principles: i) Unified prompting: OpenWorldSAM supports a diverse range of prompts, including category-level and sentence-level language descriptions, providing a flexible interface for various segmentation tasks. ii) Efficiency: By freezing the pre-trained components of SAM2 and the VLM, we train only 4.5 million parameters on the COCO-stuff dataset, achieving remarkable resource efficiency. iii) Instance Awareness: We enhance the model's spatial understanding through novel positional tie-breaker embeddings and cross-attention layers, enabling effective segmentation of multiple instances. iv) Generalization: OpenWorldSAM exhibits strong zero-shot capabilities, generalizing well on unseen categories and an open vocabulary of concepts without additional training. Extensive experiments demonstrate that OpenWorldSAM achieves state-of-the-art performance in open-vocabulary semantic, instance, and panoptic segmentation across multiple benchmarks, including ADE20k, PASCAL, ScanNet, and SUN-RGBD.
Problem

Research questions and friction points this paper is trying to address.

Segment objects using open-ended language prompts
Extend SAM2 for open-vocabulary image segmentation
Achieve zero-shot generalization on unseen categories
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends SAM2 with multi-modal vision-language embeddings
Trains only 4.5M parameters for resource efficiency
Enhances spatial understanding via positional embeddings
👥 Authors
Shiting Xiao
Yale University
Rishabh Kabra
Google DeepMind, University College London
Unsupervised Learning, Causality, Computer Vision, Reinforcement Learning
Yuhang Li
Yale University
Machine Learning
Donghyun Lee
Yale University
Joao Carreira
Google DeepMind
Priyadarshini Panda
Yale University