AI Summary
Current multimodal large language models (MLLMs) struggle to jointly achieve global semantic understanding and local temporal tracking in referring video object segmentation (RefVOS): they either rely on sparse keyframes for global reasoning or on dense consecutive frames for local tracking, often requiring external modules to compensate for inherent limitations. This work proposes, for the first time, a unified global-local dual-path modeling paradigm that jointly captures long- and short-term spatiotemporal dependencies via sparse context frames and dense query frames. We introduce object-aware contrastive learning and a self-refining keyframe selection mechanism to overcome MLLM context window constraints. The model is trained end-to-end by jointly optimizing a multimodal large language model and a pre-trained VOS memory bank, integrated with a frame selection and propagation framework. Our method achieves new state-of-the-art performance on MeViS and Ref-Youtube-VOS, significantly improving both segmentation accuracy and robustness.
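To make the dual-path idea concrete, here is a minimal sketch (not the paper's code; the function name `sample_dual_path` and all parameters are hypothetical) of how a video could be split into sparse context frames for global reasoning and a dense window of consecutive query frames for local tracking:

```python
# Illustrative sketch of global-local dual-path frame sampling.
# Assumption: frames are indexed 0..num_frames-1; exact sampling in the
# paper may differ.

def sample_dual_path(num_frames, num_context=4, query_window=4, query_start=0):
    """Return (context, query) frame indices.

    context: sparse frames evenly spaced over the whole video (global path).
    query:   consecutive frames starting at query_start (local path).
    """
    # Sparse context frames span the full video at a fixed stride.
    step = max(1, (num_frames - 1) // max(1, num_context - 1))
    context = list(range(0, num_frames, step))[:num_context]
    # Dense query frames form one contiguous temporal window.
    end = min(num_frames, query_start + query_window)
    query = list(range(query_start, end))
    return context, query

context, query = sample_dual_path(num_frames=40, num_context=4,
                                  query_window=4, query_start=8)
```

Both index sets are fed to the model together, so each dense tracking step sees global context without exceeding the context window.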
Abstract
This paper proposes a novel framework utilizing multi-modal large language models (MLLMs) for referring video object segmentation (RefVOS). Previous MLLM-based methods commonly struggle with the dilemma between "Ref" and "VOS": they either specialize in understanding a few key frames (global reasoning) or tracking objects on continuous frames (local reasoning), and rely on external VOS models or frame selectors to mitigate the other end of the challenge. However, our framework GLUS shows that global and local consistency can be unified into a single video segmentation MLLM: a set of sparse "context frames" provides global information, while a stream of continuous "query frames" conducts local object tracking. This is further supported by jointly training the MLLM with a pre-trained VOS memory bank to simultaneously digest short-range and long-range temporal information. To improve the information efficiency within the limited context window of MLLMs, we introduce object contrastive learning to distinguish hard false-positive objects and a self-refined framework to identify crucial frames and perform propagation. By collectively integrating these insights, our GLUS delivers a simple yet effective baseline, achieving new state-of-the-art for MLLMs on the MeViS and Ref-Youtube-VOS benchmarks. Our project page is at https://glus-video.github.io/.
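The object contrastive learning mentioned above can be illustrated with an InfoNCE-style loss that pulls the predicted object embedding toward the referred target and away from hard false-positive objects in the scene. This is a hedged sketch under assumed details (unit-norm embeddings, a temperature `tau`, function name `object_contrastive_loss`), not GLUS's actual implementation:

```python
# Illustrative InfoNCE-style object contrastive loss (assumptions: inputs
# are unit-norm embedding vectors given as lists of floats; tau is a
# temperature hyperparameter).
import math

def object_contrastive_loss(pred, target, negatives, tau=0.07):
    """Lower when pred matches target; higher when pred also matches
    hard false-positive object embeddings in `negatives`."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    pos = math.exp(dot(pred, target) / tau)
    negs = sum(math.exp(dot(pred, n) / tau) for n in negatives)
    return -math.log(pos / (pos + negs))
```

Intuitively, a distractor object whose embedding is close to the prediction contributes a large negative term, forcing the model to sharpen the distinction between the referred object and look-alikes.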