🤖 AI Summary
Existing referring video object segmentation (RVOS) methods predominantly focus on isolating and localizing a single target, neglecting inter-object interactions. This work introduces InterRVOS—a novel interaction-aware RVOS task requiring simultaneous segmentation of both the action performer (actor) and the interaction target (target) specified by natural language. Our contributions are fourfold: (1) formal task definition; (2) construction of InterRVOS-8K, the first large-scale, automatically generated dataset for this task; (3) design of an actor-target-aware evaluation protocol; and (4) proposal of ReVIOSa, a baseline architecture integrating language–vision–temporal multimodal interaction modeling with self-supervised interaction mask generation. Extensive experiments demonstrate that ReVIOSa significantly outperforms state-of-the-art methods under both standard and interaction-focused evaluation settings—achieving up to a 12.3% mIoU gain in challenging motion-only multi-instance scenarios.
📝 Abstract
Referring video object segmentation aims to segment the object in a video corresponding to a given natural language expression. While prior works have explored various referring scenarios, including motion-centric or multi-instance expressions, most approaches still focus on localizing a single target object in isolation. However, in comprehensive video understanding, an object's role is often defined by its interactions with other entities, which are largely overlooked in existing datasets and models. In this work, we introduce Interaction-aware referring video object segmentation (InterRVOS), a new task that requires segmenting both actor and target entities involved in an interaction. Each interaction is described through a pair of complementary expressions from different semantic perspectives, enabling fine-grained modeling of inter-object relationships. To tackle this task, we propose InterRVOS-8K, a large-scale, automatically constructed dataset containing diverse interaction-aware expressions with corresponding masks, including challenging cases such as motion-only multi-instance expressions. We also present a baseline architecture, ReVIOSa, designed to handle actor-target segmentation from a single expression, achieving strong performance in both standard and interaction-focused settings. Furthermore, we introduce an actor-target-aware evaluation setting that enables a more targeted assessment of interaction understanding. Experimental results demonstrate that our approach outperforms prior methods in modeling complex object interactions for the referring video object segmentation task, establishing a strong foundation for future research in interaction-centric video understanding. Our project page is available at https://cvlab-kaist.github.io/InterRVOS.