🤖 AI Summary
Traditional video segmentation methods are constrained to closed-vocabulary settings and struggle with unseen objects and implicit temporal queries (e.g., surgical instruments that appear and disappear across procedural stages). Existing reasoning segmentation (RS) approaches assume that the target remains relevant throughout the entire video, which contradicts real-world temporal dynamics. To address this, we propose **Temporally-Constrained Video Reasoning Segmentation (TC-VideoRS)**, a novel task that introduces explicit temporal constraints, requiring models to adaptively infer spatiotemporal target relevance from natural-language time cues. We design a prompt-based multimodal architecture that jointly models cross-modal semantics and spatiotemporal consistency. Furthermore, we develop an automated benchmark construction pipeline and release TCVideoRSBenchmark, the first benchmark for this temporally dynamic setting, comprising 52 temporally rich samples built from the MVOR dataset. Experiments demonstrate our method's effectiveness and robustness in open-vocabulary, time-sensitive video segmentation.
📝 Abstract
Conventional approaches to video segmentation are confined to predefined object categories and cannot identify out-of-vocabulary objects, let alone objects that are not named explicitly but only referred to implicitly in complex text queries. This shortcoming limits the utility of video segmentation in complex and variable scenarios, where a closed set of object categories is difficult to define and where users may not know the exact object categories that will appear in the video. Such scenarios arise in operating room video analysis, where different health systems may use different workflows and instrumentation, requiring flexible solutions for video analysis. Reasoning segmentation (RS) offers promise toward such a solution, enabling natural-language text queries as the interface for identifying objects to segment. However, existing video RS formulations assume that target objects remain contextually relevant throughout the entire video sequence. This assumption is inadequate for real-world scenarios in which objects of interest appear, disappear, or change relevance dynamically based on temporal context, such as surgical instruments that become relevant only during specific procedural phases or anatomical structures that gain importance at particular moments during surgery. Our first contribution is the introduction of temporally-constrained video reasoning segmentation, a novel task formulation that requires models to infer implicitly when target objects become contextually relevant from text queries that incorporate temporal reasoning. Since manual annotation of temporally-constrained video RS datasets would be expensive and would limit scalability, our second contribution is an automated benchmark construction method. Finally, we present TCVideoRSBenchmark, a temporally-constrained video RS dataset containing 52 samples built from the videos of the MVOR dataset.
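To make the task formulation concrete, the following is a minimal, hypothetical sketch of what a temporally-constrained video RS sample might look like. All field names (`query`, `num_frames`, `relevant_span`, `masks`) and the helper `expected_mask_frames` are illustrative assumptions for exposition, not the actual schema of TCVideoRSBenchmark: the key point is that ground-truth masks exist only on the frames where the query's temporal constraint makes the target relevant.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Hypothetical sample layout for temporally-constrained video RS.
# Field names are illustrative assumptions, not the benchmark's real schema.
@dataclass
class TCSample:
    query: str                        # text query carrying an implicit temporal cue
    num_frames: int                   # total frames in the clip
    relevant_span: Tuple[int, int]    # (start, end) frames where the target is relevant
    masks: List[Optional[list]]       # per-frame masks; None outside the relevant span

def expected_mask_frames(sample: TCSample) -> List[int]:
    """Frames on which a correct model should emit a non-empty mask."""
    start, end = sample.relevant_span
    return [t for t in range(sample.num_frames) if start <= t <= end]

# Toy example: the target matters only on frames 4..7 of a 10-frame clip.
sample = TCSample(
    query="segment the scissors only while the surgeon is suturing",
    num_frames=10,
    relevant_span=(4, 7),
    masks=[None] * 4 + [[1]] * 4 + [None] * 2,
)
```

Under this toy formulation, `expected_mask_frames(sample)` returns `[4, 5, 6, 7]`, capturing the contrast with conventional video RS, where the model would be expected to segment the target on every frame regardless of temporal context.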