🤖 AI Summary
Moving object segmentation (MOS) from a single image, without temporal cues, is an underexplored yet critical challenge, especially for real-time applications like autonomous driving.
Method: We propose the first purely single-frame MOS framework. It leverages multimodal large language models (MLLMs) with chain-of-thought (CoT) reasoning to generate semantic prompts, jointly harnessing SAM and vision-language models (VLMs) for cross-modal feature alignment and logic-guided segmentation. An iterative reasoning refinement loop further enhances scene understanding and segmentation accuracy.
Contributions/Results: (1) We formally define and solve the video-free MOS task for the first time; (2) we establish an interpretable, end-to-end single-image MOS paradigm; (3) our method achieves 92.5% J&F on public MOS benchmarks, significantly outperforming prior single-frame approaches, and demonstrates robust performance in real-world autonomous driving scenarios, matching or exceeding multi-frame methods.
📝 Abstract
Moving object segmentation plays a vital role in understanding dynamic visual environments. While existing methods rely on multi-frame image sequences to identify moving objects, single-image MOS is critical for applications such as motion intention prediction and handling camera frame drops. However, segmenting moving objects from a single image remains challenging for existing methods due to the absence of temporal cues. To address this gap, we propose MovSAM, the first framework for single-image moving object segmentation. MovSAM leverages a Multimodal Large Language Model (MLLM) enhanced with Chain-of-Thought (CoT) prompting to search for the moving object and, through deep thinking, generate text prompts for segmentation. These prompts are cross-fused with visual features from the Segment Anything Model (SAM) and a Vision-Language Model (VLM), enabling logic-driven moving object segmentation. The segmentation results then pass through a deep-thinking refinement loop, allowing MovSAM to iteratively improve its understanding of scene context and inter-object relationships via logical reasoning. This design enables MovSAM to segment moving objects in single images by reasoning about the scene. We deploy MovSAM in real-world autonomous driving scenarios where multi-frame methods fail, validating its practical applicability and effectiveness. Furthermore, despite the inherent advantage multi-frame methods gain from temporal information, MovSAM achieves state-of-the-art performance across public MOS benchmarks, reaching 92.5% J&F. Our implementation will be available at https://github.com/IRMVLab/MovSAM.
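The reason-segment-verify loop described above can be sketched in pseudocode form. This is a minimal illustration, not the paper's implementation: every function name below (`mllm_reason`, `segment_with_prompt`, `vlm_score`) and all returned values are hypothetical stand-ins for the real MLLM, SAM, and VLM components.

```python
# Hypothetical sketch of MovSAM's deep-thinking refinement loop.
# The three component functions are stubs standing in for the real
# CoT-prompted MLLM, the SAM-based segmenter, and the VLM verifier.

def mllm_reason(image, feedback=None):
    # Stub: the MLLM reasons (chain-of-thought) over single-image cues
    # such as pose, blur, and road context, optionally conditioned on
    # feedback from a previous iteration, and emits a text prompt.
    if feedback is None:
        return "the parked car by the roadside"
    return "the pedestrian stepping off the curb"

def segment_with_prompt(image, text_prompt):
    # Stub: SAM-style segmentation guided by the text prompt,
    # cross-fused with visual features in the real system.
    return {"prompt": text_prompt, "mask": [[0, 1], [1, 1]]}

def vlm_score(image, result):
    # Stub: the VLM judges whether the mask plausibly covers a
    # *moving* object given the scene context.
    return 0.95 if "pedestrian" in result["prompt"] else 0.55

def movsam_single_image(image, max_iters=3, threshold=0.9):
    """Iteratively refine: reason -> segment -> verify -> re-reason."""
    feedback = None
    result, score = None, 0.0
    for _ in range(max_iters):
        prompt = mllm_reason(image, feedback)
        result = segment_with_prompt(image, prompt)
        score = vlm_score(image, result)
        if score >= threshold:
            break  # the VLM accepts the mask; stop refining
        feedback = f"low confidence ({score:.2f}) for '{prompt}'"
    return result, score
```

In this toy run the first prompt is rejected by the verifier, the feedback steers a second round of reasoning, and the refined prompt is accepted; the real loop terminates analogously once the scene-level check passes or an iteration budget is exhausted.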