🤖 AI Summary
Existing RVOS datasets are limited to short (few-second) clips with highly salient objects, failing to support realistic long-video scenarios. This work introduces Long-RVOS, the first large-scale benchmark for long-term referring video object segmentation, comprising over 2,000 videos with an average duration exceeding 60 seconds. It incorporates realistic challenges including occlusion, object disappearance and reappearance, and camera shot transitions, alongside fine-grained, multi-type natural language descriptions and new temporal and spatiotemporal consistency metrics. The work formally defines the long-term RVOS task for the first time. To address it, the authors propose ReferMo, a baseline model that integrates motion information with a local-to-global architecture to capture both short-term dynamics and long-range dependencies. Under joint frame-level and temporal evaluation, ReferMo significantly outperforms six state-of-the-art methods, demonstrating an effective modeling paradigm for long-video segmentation.
📝 Abstract
Referring video object segmentation (RVOS) aims to identify, track and segment objects in a video based on language descriptions, and has received great attention in recent years. However, existing datasets remain focused on short video clips of several seconds, with salient objects visible in most frames. To advance the task towards more practical scenarios, we introduce **Long-RVOS**, a large-scale benchmark for long-term referring video object segmentation. Long-RVOS contains 2,000+ videos with an average duration exceeding 60 seconds, covering a variety of objects that undergo occlusion, disappearance and reappearance, and shot changes. The objects are manually annotated with three different types of descriptions to individually evaluate the understanding of static attributes, motion patterns and spatiotemporal relationships. Moreover, unlike previous benchmarks that rely solely on per-frame spatial evaluation, we introduce two new metrics to assess temporal and spatiotemporal consistency. We benchmark six state-of-the-art methods on Long-RVOS. The results show that current approaches struggle severely with the long-video challenges. To address this, we further propose ReferMo, a promising baseline method that integrates motion information to expand the temporal receptive field, and employs a local-to-global architecture to capture both short-term dynamics and long-term dependencies. Despite its simplicity, ReferMo achieves significant improvements over current methods in long-term scenarios. We hope that Long-RVOS and our baseline can drive future RVOS research towards tackling more realistic and long-form videos.