SAMWISE: Infusing wisdom in SAM2 for Text-Driven Video Segmentation

📅 2024-11-26
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Existing Referring Video Object Segmentation (RVOS) methods struggle to combine global temporal modeling with streaming online inference: clip-based approaches lose long-range context, while full-video processing precludes real-time streaming applications. This paper proposes a lightweight, streaming-capable, text-driven video segmentation framework built on a frozen SAM2 backbone, requiring no fine-tuning of its weights. The contributions are threefold: (1) a novel adapter module that injects natural-language cues and temporal information directly into SAM2's feature extraction, without outsourcing modality interaction to external models; (2) identification of a tracking-bias phenomenon in SAM2, together with a learnable module that adjusts its tracking focus when current-frame features suggest an object better aligned with the caption; and (3) joint modeling of text encoding, inter-frame feature alignment, and lightweight temporal attention. Evaluated on multiple RVOS benchmarks, the method achieves state-of-the-art performance while adding only 4.2M parameters, and supports real-time streaming inference. The code is publicly available.

📝 Abstract
Referring Video Object Segmentation (RVOS) relies on natural language expressions to segment an object in a video clip. Existing methods restrict reasoning either to independent short clips, losing global context, or process the entire video offline, impairing their application in a streaming fashion. In this work, we aim to surpass these limitations and design an RVOS method capable of effectively operating in streaming-like scenarios while retaining contextual information from past frames. We build upon the Segment-Anything 2 (SAM2) model, which provides robust segmentation and tracking capabilities and is naturally suited for streaming processing. We make SAM2 wiser by empowering it with natural language understanding and explicit temporal modeling at the feature extraction stage, without fine-tuning its weights and without outsourcing modality interaction to external models. To this end, we introduce a novel adapter module that injects temporal information and multi-modal cues into the feature extraction process. We further reveal the phenomenon of tracking bias in SAM2 and propose a learnable module to adjust its tracking focus when the current frame features suggest a new object more aligned with the caption. Our proposed method, SAMWISE, achieves state-of-the-art results across various benchmarks while adding a negligible overhead of just 4.2M parameters. The code is available at https://github.com/ClaudiaCuttano/SAMWISE
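The abstract describes an adapter that fuses language cues into the frozen backbone's features. The paper's actual module is not detailed here, but the general pattern (visual tokens cross-attending to caption tokens, merged residually so the frozen features pass through unchanged when the adapter contributes nothing) can be sketched as follows. All names, shapes, and the single-head attention layout below are illustrative assumptions, not SAMWISE's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_adapter(frame_feats, text_feats, seed=0):
    """Hypothetical adapter sketch: each visual token attends to the
    caption tokens, and the attended text content is added residually
    to the (frozen) backbone features.

    frame_feats: (N_vis, D) visual tokens from the frozen encoder
    text_feats:  (N_txt, D) token embeddings of the referring caption
    """
    rng = np.random.default_rng(seed)
    D = frame_feats.shape[1]
    # Learnable projections; randomly initialized here for illustration.
    Wq = rng.standard_normal((D, D)) / np.sqrt(D)
    Wk = rng.standard_normal((D, D)) / np.sqrt(D)
    Wv = rng.standard_normal((D, D)) / np.sqrt(D)
    q = frame_feats @ Wq                    # queries from vision
    k = text_feats @ Wk                     # keys from language
    v = text_feats @ Wv                     # values from language
    attn = softmax(q @ k.T / np.sqrt(D))    # (N_vis, N_txt)
    return frame_feats + attn @ v           # residual fusion

vis = np.random.default_rng(1).standard_normal((16, 32))  # 16 visual tokens
txt = np.random.default_rng(2).standard_normal((5, 32))   # 5 caption tokens
fused = cross_modal_adapter(vis, txt)
print(fused.shape)  # (16, 32): same shape as the frozen visual features
```

Because the fusion is residual and the backbone stays frozen, only the small projection matrices are trained, which is consistent with the paper's reported 4.2M-parameter overhead for the full set of added modules.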
Problem

Research questions and friction points this paper is trying to address.

Enables streaming video segmentation with language cues
Enhances SAM2 with temporal and multi-modal understanding
Addresses tracking bias in SAM2 for accurate segmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enhances SAM2 with natural language understanding
Introduces adapter for temporal and multi-modal features
Proposes learnable module to adjust tracking bias
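The tracking-bias idea above, redirecting SAM2's focus when the current frame contains an object that matches the caption better than the one being tracked, can be illustrated with a toy decision rule. The function name, the cosine-similarity scoring, and the fixed margin are all assumptions for illustration; the paper's module is learnable and operates on internal features:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def adjust_tracking_focus(tracked_feat, candidate_feats, caption_feat, margin=0.1):
    """Hypothetical sketch of a tracking-bias check: keep following the
    currently tracked object unless some candidate in the current frame
    aligns with the caption by more than `margin`. Returns the index of
    the new focus, or -1 to keep the current track.
    """
    tracked_score = cosine(tracked_feat, caption_feat)
    scores = [cosine(c, caption_feat) for c in candidate_feats]
    best = int(np.argmax(scores))
    if scores[best] > tracked_score + margin:
        return best            # redirect focus to the better-aligned object
    return -1                  # keep the current track

# Toy example: the tracked feature has drifted away from the caption,
# while candidate 1 matches it closely.
caption = np.array([1.0, 0.0, 0.0])
tracked = np.array([0.0, 1.0, 0.0])       # orthogonal to the caption
candidates = [np.array([0.0, 0.0, 1.0]),  # unrelated object
              np.array([0.9, 0.1, 0.0])]  # caption-aligned object
focus = adjust_tracking_focus(tracked, candidates, caption)
print(focus)  # 1: switch focus to the caption-aligned candidate
```

The margin term plays the role of hysteresis: without it, the focus could flicker between objects with near-equal caption alignment from frame to frame.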