MeViS: A Multi-Modal Dataset for Referring Motion Expression Video Segmentation

📅 2025-08-19
🏛️ IEEE Transactions on Pattern Analysis and Machine Intelligence
📈 Citations: 22
Influential: 0
🤖 AI Summary
Existing video segmentation datasets emphasize static attribute descriptions and neglect the critical role of motion in video understanding. To address this, we introduce MeViS, the first multimodal video segmentation dataset guided explicitly by motion expressions, comprising 33,072 human-annotated text and audio motion descriptions covering 8,171 objects in 2,006 videos of complex scenes. MeViS supports four tasks: Referring Video Object Segmentation (RVOS), Audio-guided Video Object Segmentation (AVOS), Referring Multi-Object Tracking (RMOT), and Referring Motion Expression Generation (RMEG), making motion semantics the core referential cue and breaking the static-dominant paradigm. We further propose LMPM++, a model that integrates multimodal aligned annotation, motion-aware modeling, and a joint audio-visual-linguistic representation, achieving new state-of-the-art performance on RVOS, AVOS, and RMOT. A comprehensive evaluation of 15 mainstream methods reveals systematic motion-reasoning bottlenecks; leveraging MeViS significantly improves segmentation and tracking accuracy, advancing motion-centric video understanding.

📝 Abstract
This paper proposes a large-scale multi-modal dataset for referring motion expression video segmentation, focusing on segmenting and tracking target objects in videos based on language descriptions of the objects' motions. Existing referring video segmentation datasets often focus on salient objects and use language expressions rich in static attributes, potentially allowing the target object to be identified in a single frame. Such datasets underemphasize the role of motion in both videos and language expressions. To explore the feasibility of using motion expressions and motion reasoning clues for pixel-level video understanding, we introduce MeViS, a dataset containing 33,072 human-annotated motion expressions in both text and audio, covering 8,171 objects in 2,006 videos of complex scenarios. We benchmark 15 existing methods across the 4 tasks supported by MeViS, including 6 referring video object segmentation (RVOS) methods, 3 audio-guided video object segmentation (AVOS) methods, 2 referring multi-object tracking (RMOT) methods, and 4 video captioning methods for the newly introduced referring motion expression generation (RMEG) task. The results demonstrate weaknesses and limitations of existing methods in addressing motion expression-guided video understanding. We further analyze the challenges and propose an approach, LMPM++, for RVOS/AVOS/RMOT that achieves new state-of-the-art results. Our dataset provides a platform that facilitates the development of motion expression-guided video understanding algorithms in complex video scenes.
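The abstract does not spell out the evaluation protocol, but referring video segmentation benchmarks in this line of work are commonly scored per frame with region similarity J (mask IoU) and contour accuracy F, reported jointly as J&F. The following is a minimal NumPy/SciPy sketch of these two metrics; the function names and the dilation-based boundary tolerance are illustrative assumptions, not the paper's official evaluation code.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion


def jaccard(pred: np.ndarray, gt: np.ndarray) -> float:
    """Region similarity J: intersection-over-union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:  # both masks empty: treat as a perfect match
        return 1.0
    return float(np.logical_and(pred, gt).sum() / union)


def _boundary(mask: np.ndarray) -> np.ndarray:
    """1-pixel-wide contour of a binary mask (mask minus its erosion)."""
    mask = mask.astype(bool)
    return mask & ~binary_erosion(mask)


def boundary_f(pred: np.ndarray, gt: np.ndarray, tol: int = 2) -> float:
    """Contour accuracy F: F-measure over boundary pixels, with matches
    within `tol` pixels approximated via binary dilation."""
    pb, gb = _boundary(pred), _boundary(gt)
    if pb.sum() == 0 and gb.sum() == 0:
        return 1.0
    reach = np.ones((2 * tol + 1, 2 * tol + 1), dtype=bool)
    precision = (pb & binary_dilation(gb, structure=reach)).sum() / max(pb.sum(), 1)
    recall = (gb & binary_dilation(pb, structure=reach)).sum() / max(gb.sum(), 1)
    if precision + recall == 0:
        return 0.0
    return float(2 * precision * recall / (precision + recall))


def j_and_f(pred: np.ndarray, gt: np.ndarray) -> float:
    """J&F: mean of region similarity and contour accuracy for one frame."""
    return 0.5 * (jaccard(pred, gt) + boundary_f(pred, gt))
```

Per-frame scores would then be averaged over frames and expressions; the benchmark's official evaluation toolkit should be used for any reported numbers.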
Problem

Research questions and friction points this paper is trying to address.

Existing referring video segmentation datasets focus on salient objects and static-attribute expressions, often letting the target be identified from a single frame
The role of motion, in both videos and language expressions, is underexplored for pixel-level video understanding
It is unclear how well existing methods handle motion expression-guided video understanding in complex scenes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces MeViS, a dataset of 33,072 text and audio motion expressions covering 8,171 objects in 2,006 videos
Benchmarks 15 existing methods across the 4 supported tasks (RVOS, AVOS, RMOT, RMEG)
Proposes LMPM++, achieving new state-of-the-art results on RVOS, AVOS, and RMOT