VideoMolmo: Spatio-Temporal Grounding Meets Pointing

📅 2025-06-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current video tracking methods lack the sophisticated reasoning capabilities of large language models (LLMs), hindering text-driven fine-grained spatio-temporal localization. To address this, the authors propose VideoMolmo, a decoupled multimodal framework comprising two stages: first, an LLM (built upon Molmo) parses the textual query and generates frame-wise pointing coordinates, with a temporal attention module conditioning each frame on preceding frames; second, a SAM2-based bidirectional point-propagation module with a temporal mask-fusion mechanism produces precise, temporally coherent segmentation. Key contributions: (1) a novel two-stage decoupled architecture; (2) a large-scale video pointing dataset (72k video-caption pairs annotated with 100k object points); (3) VPoS-Bench, an out-of-distribution benchmark covering five real-world scenarios; and (4) state-of-the-art performance on VPoS-Bench, Refer-VOS, and Reasoning VOS, demonstrating superior spatio-temporal localization accuracy and generalization. Code and models are publicly available.

📝 Abstract
Spatio-temporal localization is vital for precise interactions across diverse domains, from biological research to autonomous navigation and interactive interfaces. Current video-based approaches, while proficient in tracking, lack the sophisticated reasoning capabilities of large language models, limiting their contextual understanding and generalization. We introduce VideoMolmo, a large multimodal model tailored for fine-grained spatio-temporal pointing conditioned on textual descriptions. Building upon the Molmo architecture, VideoMolmo incorporates a temporal module utilizing an attention mechanism to condition each frame on preceding frames, ensuring temporal consistency. Additionally, our novel temporal mask fusion pipeline employs SAM2 for bidirectional point propagation, significantly enhancing coherence across video sequences. This two-step decomposition, i.e., first using the LLM to generate precise pointing coordinates, then relying on a sequential mask-fusion module to produce coherent segmentation, not only simplifies the task for the language model but also enhances interpretability. Due to the lack of suitable datasets, we curate a comprehensive dataset comprising 72k video-caption pairs annotated with 100k object points. To evaluate the generalization of VideoMolmo, we introduce VPoS-Bench, a challenging out-of-distribution benchmark spanning five real-world scenarios: Cell Tracking, Egocentric Vision, Autonomous Driving, Video-GUI Interaction, and Robotics. We also evaluate our model on Referring Video Object Segmentation (Refer-VOS) and Reasoning VOS tasks. In comparison to existing models, VideoMolmo substantially improves spatio-temporal pointing accuracy and reasoning capability. Our code and models are publicly available at https://github.com/mbzuai-oryx/VideoMolmo.
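The two-step decomposition described in the abstract (LLM pointing first, then bidirectional propagation and mask fusion) can be sketched in simplified form. Everything below is hypothetical scaffolding, not the authors' code: the pointing stage and the SAM2 propagator are replaced by stubs, and masks are modeled as sets of pixel coordinates.

```python
# Hedged sketch of the two-stage VideoMolmo pipeline (hypothetical names;
# the real system uses Molmo for pointing and SAM2 for propagation).

def point_from_query(frames, query):
    """Stage 1 stand-in: the LLM would return a (frame_idx, (x, y)) point
    for the object named in the textual query. Stubbed here."""
    return 0, (4, 4)  # pretend the object is found at (4, 4) in frame 0

def propagate(point, n_frames, start, step):
    """Stand-in for SAM2 point propagation: grow a small square mask
    around the point in each frame visited in the given direction."""
    x, y = point
    masks = {}
    i = start
    while 0 <= i < n_frames:
        masks[i] = {(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
        i += step
    return masks

def fuse_bidirectional(forward, backward):
    """Temporal mask fusion: merge forward- and backward-pass masks per frame."""
    frames = set(forward) | set(backward)
    return {i: forward.get(i, set()) | backward.get(i, set()) for i in frames}

def segment_video(frames, query):
    idx, pt = point_from_query(frames, query)  # stage 1: pointing
    fwd = propagate(pt, len(frames), idx, +1)  # stage 2a: forward pass
    bwd = propagate(pt, len(frames), idx, -1)  # stage 2b: backward pass
    return fuse_bidirectional(fwd, bwd)        # stage 2c: mask fusion
```

The decoupling is the point of the sketch: the language model only has to emit coordinates, while dense, temporally coherent masks come from the propagation and fusion stage.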
Problem

Research questions and friction points this paper is trying to address.

Enhancing spatio-temporal localization with multimodal reasoning
Improving video object pointing via temporal consistency
Addressing lack of datasets for fine-grained video grounding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Large multimodal model for spatio-temporal pointing
Temporal module with attention for frame consistency
Bidirectional point propagation with SAM2 fusion
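The "temporal module with attention" listed above conditions each frame on its predecessors. A minimal causal self-attention sketch over per-frame feature vectors (pure Python, single head, hypothetical dimensions; not the paper's implementation):

```python
import math

def temporal_attention(features):
    """Condition each frame's feature vector on itself and all preceding
    frames via scaled dot-product attention (causal, single head)."""
    d = len(features[0])
    out = []
    for t, q in enumerate(features):
        keys = features[: t + 1]  # only current and preceding frames
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        m = max(scores)           # subtract max for numerical stability
        weights = [math.exp(s - m) for s in scores]
        z = sum(weights)
        weights = [w / z for w in weights]
        # weighted sum of (past + current) frame features
        out.append([sum(w * k[j] for w, k in zip(weights, keys))
                    for j in range(d)])
    return out
```

Because the attention is causal, the first frame attends only to itself, and later frames blend in information from earlier ones, which is what enforces temporal consistency in the pointing stage.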