Show Me When and Where: Towards Referring Video Object Segmentation in the Wild

📅 2026-03-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing referring video object segmentation methods are limited to trimmed videos where the target object is always present, making them ill-suited for real-world scenarios where the target may be absent in certain frames. To address this challenge, this work introduces a new task formulation for untrimmed videos that requires models to jointly predict both the temporal presence and spatial location of the referred object. We also present YoURVOS, a large-scale dataset captured in realistic settings to support this task. To tackle the problem, we propose OMFormer (Object-level Multimodal Transformer), which leverages object-level cross-modal interactions to achieve global spatiotemporal localization. Experimental results demonstrate that existing approaches suffer significant performance degradation as the number of target-absent frames increases, whereas OMFormer exhibits robust performance on YoURVOS, establishing a reliable baseline for practical applications.
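To make the new task formulation concrete, the sketch below shows the kind of joint output it demands: a per-frame presence score ("when") alongside per-frame mask logits ("where"). This is a minimal illustration under assumed tensor shapes, not the paper's implementation; every name in it (`JointWhenWhereHead`, the 0.5 thresholds, the feature dimensions) is hypothetical.

```python
# Minimal sketch (not the paper's code) of the joint prediction the new
# setting asks for: per-frame target presence ("when") plus per-frame
# segmentation masks ("where"). All module and tensor names are illustrative.
import torch
import torch.nn as nn

class JointWhenWhereHead(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        # Fuse pooled frame features with the sentence embedding of the expression.
        self.fuse = nn.Linear(2 * dim, dim)
        self.presence = nn.Linear(dim, 1)      # "when": per-frame presence logit
        self.mask_embed = nn.Linear(dim, dim)  # "where": per-frame mask kernel

    def forward(self, frame_feats, pixel_feats, text_feat):
        # frame_feats: (T, dim) pooled per-frame features
        # pixel_feats: (T, dim, H, W) dense per-frame feature maps
        # text_feat:   (dim,) sentence-level embedding of the expression
        T = frame_feats.shape[0]
        fused = self.fuse(torch.cat([frame_feats,
                                     text_feat.expand(T, -1)], dim=-1))
        presence_logits = self.presence(fused).squeeze(-1)   # (T,)
        kernels = self.mask_embed(fused)                      # (T, dim)
        # Dot-product each frame's kernel with its feature map -> mask logits.
        mask_logits = torch.einsum('td,tdhw->thw', kernels, pixel_feats)
        # Frames predicted absent yield empty masks at inference.
        keep = presence_logits.sigmoid() > 0.5
        masks = (mask_logits.sigmoid() > 0.5) & keep.view(T, 1, 1)
        return presence_logits, mask_logits, masks
```

In this reading, frames whose presence probability falls below the threshold are emitted as empty masks, which is exactly the failure mode that trimmed-video methods never have to handle.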

📝 Abstract
Referring video object segmentation (RVOS) has recently gained great popularity in computer vision due to its widespread applications. The existing RVOS setting contains elaborately trimmed videos in which the text-referred objects appear in every frame, which fails to fully reflect the realistic challenges of this task. This simplified setting requires RVOS methods to predict only where objects appear, with no need to show when they appear. In this work, we introduce a new setting towards in-the-wild RVOS. To this end, we collect a new benchmark dataset of YouTube Untrimmed videos for RVOS, YoURVOS, which contains 1,120 in-the-wild videos with 7 times the duration and number of scenes of existing datasets. Our new benchmark challenges RVOS methods to show not only where but also when objects appear in videos. To set a baseline, we propose the Object-level Multimodal TransFormer (OMFormer) to tackle these challenges, which is characterized by encoding object-level multimodal interactions for efficient and global spatial-temporal localization. We demonstrate that previous RVOS methods struggle on our YoURVOS benchmark, especially as the number of target-absent frames increases, while our OMFormer consistently performs well. Our YoURVOS dataset offers a much-needed benchmark that will push forward the advancement of RVOS methods for practical applications.
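The abstract's phrase "object-level multimodal interactions" suggests candidate-object representations that are conditioned on the referring expression before any dense decoding. The sketch below is one plausible reading, assuming a DETR-style set of learned object queries cross-attending to word features; it is an illustrative guess under those assumptions, not OMFormer's actual architecture, and all names in it are hypothetical.

```python
# A rough sketch of what "object-level multimodal interaction" could look
# like: a set of learned object queries cross-attends to word features so
# each candidate object is conditioned on the expression before decoding.
# Assumption-laden illustration, not OMFormer's actual design.
import torch
import torch.nn as nn

class ObjectTextInteraction(nn.Module):
    def __init__(self, dim: int = 256, num_queries: int = 10, heads: int = 8):
        super().__init__()
        self.queries = nn.Embedding(num_queries, dim)  # object-level slots
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, word_feats):
        # word_feats: (B, L, dim) token features from a text encoder
        B = word_feats.shape[0]
        q = self.queries.weight.unsqueeze(0).expand(B, -1, -1)  # (B, Nq, dim)
        # Each object query gathers the words it is responsible for.
        attended, _ = self.cross_attn(q, word_feats, word_feats)
        return self.norm(q + attended)  # text-conditioned object queries
```

Operating at the object level keeps the cross-modal attention cost proportional to the number of queries rather than the number of pixels, which is presumably part of what the abstract means by "efficient and global" localization.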
Problem

Research questions and friction points this paper is trying to address.

Referring Video Object Segmentation
in-the-wild
temporal localization
spatial-temporal localization
untrimmed videos
Innovation

Methods, ideas, or system contributions that make the work stand out.

Referring Video Object Segmentation
In-the-wild Benchmark
Temporal Localization
Multimodal Transformer
Untrimmed Videos
Mingqi Gao
Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China, and also with Warwick Manufacturing Group, University of Warwick, Coventry CV1 7AL, U.K.
Jinyu Yang
Tapall.ai, Shenzhen 518055, China
Jingnan Luo
Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
Xiantong Zhen
United Imaging
Medical Image Analysis · Machine Learning · Computer Vision
Jungong Han
Chair Professor in Computer Vision, University of Sheffield, UK, FIAPR, FAAIA
Computer Vision · Video Analytics · Machine Learning
Giovanni Montana
Professor of Data Science, University of Warwick
Data Science · Machine Learning · Digital Healthcare
Feng Zheng
Southern University of Science and Technology; Spatialtemporal AI
Embodied Intelligence · Spatialtemporal AI · Computer Vision