SAM-I2V: Upgrading SAM to Support Promptable Video Segmentation with Less than 0.2% Training Cost

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address poor temporal mask consistency and high training costs in promptable video segmentation (PVS), this paper proposes SAM-I2V, a lightweight and efficient image-to-video upgradation framework. Methodologically: (1) an image-to-video feature upgrader extends the SAM image encoder into a spatiotemporal-aware module; (2) a frame-level memory filtering strategy dynamically retains high-confidence object memories; and (3) a “memory-as-prompt” mechanism leverages historical mask memories as cross-frame prompting sources. Trained at only 0.2% of SAM 2’s cost, SAM-I2V achieves over 90% of SAM 2’s performance on mainstream PVS benchmarks. It significantly improves temporal consistency and training efficiency while enabling low-cost foundation-model development for video segmentation—a resource-efficient pathway to scalable video understanding.

📝 Abstract
Foundation models like the Segment Anything Model (SAM) have significantly advanced promptable image segmentation in computer vision. However, extending these capabilities to videos presents substantial challenges, particularly in ensuring precise and temporally consistent mask propagation in dynamic scenes. SAM 2 attempts to address this by training a model on massive image and video data from scratch to learn complex spatiotemporal associations, resulting in huge training costs that hinder research and practical deployment. In this paper, we introduce SAM-I2V, an effective image-to-video upgradation method for cultivating a promptable video segmentation (PVS) model. Our approach strategically upgrades the pre-trained SAM to support PVS, significantly reducing training complexity and resource requirements. To achieve this, we introduce three key innovations: (i) an image-to-video feature extraction upgrader built upon SAM's static image encoder to enable spatiotemporal video perception, (ii) a memory filtering strategy that selects the most relevant past frames for more effective utilization of historical information, and (iii) a memory-as-prompt mechanism leveraging object memory to ensure temporally consistent mask propagation in dynamic scenes. Comprehensive experiments demonstrate that our method achieves over 90% of SAM 2's performance while using only 0.2% of its training cost. Our work presents a resource-efficient pathway to PVS, lowering barriers for further research in PVS model design and enabling broader applications and advancements in the field. Code and model are available at: https://github.com/showlab/SAM-I2V.
Problem

Research questions and friction points this paper is trying to address.

Extends SAM to video segmentation with minimal training cost
Ensures precise mask propagation in dynamic video scenes
Reduces training complexity for promptable video segmentation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Image-to-video feature extraction upgrader
Memory filtering for relevant past frames
Memory-as-prompt for consistent mask propagation
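The paper itself does not provide pseudocode on this page; as a rough illustration only, the frame-level memory filtering idea (retaining only high-confidence past-frame object memories for propagation) might be sketched as below. The `FrameMemory` type, the `max_size` budget, and the `min_conf` threshold are all hypothetical names and values, not from the paper.

```python
from dataclasses import dataclass, field


@dataclass
class FrameMemory:
    """Hypothetical container for one past frame's object memory."""
    frame_idx: int
    confidence: float          # e.g. predicted mask-quality score for the object
    embedding: list = field(default_factory=list)  # placeholder for mask/feature memory


def filter_memories(memories, max_size=6, min_conf=0.5):
    """Keep at most `max_size` high-confidence memories, in temporal order.

    Sketch of a confidence-based memory bank filter: drop low-confidence
    entries, keep the most confident ones, then restore frame order so the
    retained memories can serve as cross-frame prompts.
    """
    kept = [m for m in memories if m.confidence >= min_conf]
    kept.sort(key=lambda m: m.confidence, reverse=True)
    return sorted(kept[:max_size], key=lambda m: m.frame_idx)
```

For example, with memories of confidence 0.9, 0.3, and 0.8 and a budget of two, the filter would keep the first and third frames in temporal order. The actual selection criterion in SAM-I2V may differ (e.g. relevance to the current frame rather than raw confidence).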
Haiyang Mei
National University of Singapore, Dalian University of Technology, ETH Zurich
Computer Vision, Neuroinformatics
Pengyu Zhang
Show Lab, National University of Singapore
Mike Zheng Shou
Show Lab, National University of Singapore