🤖 AI Summary
To address temporal inconsistency, error accumulation, and abrupt occlusion changes in online video instance segmentation, this paper proposes the Local2Global (L2G) framework. The method extends the DETR architecture with a ResNet-50 backbone and adopts an end-to-end online training paradigm. Its core innovation lies in a dual-path query mechanism—local and global—and a lightweight L2G-aligner Transformer decoder that enables frame-to-frame feature alignment in real time, without heuristic rules or explicit memory modules. By jointly modeling short-term local dynamics and long-term global consistency, L2G achieves robust temporal coherence while maintaining computational efficiency. Evaluated on Youtube-VIS-19, Youtube-VIS-21, and OVIS, L2G attains 54.3, 49.4, and 37.0 AP, respectively—outperforming all existing online methods and establishing new state-of-the-art performance.
📝 Abstract
Online video segmentation methods excel at handling long sequences and capturing gradual changes, making them ideal for real-world applications. However, achieving temporally consistent predictions remains a challenge, especially under the gradual accumulation of noise or drift during online propagation, abrupt occlusions, and scene transitions. This paper introduces Local2Global, an online framework for video instance segmentation that exhibits state-of-the-art performance with a simple baseline trained purely in an online fashion. Leveraging the DETR-based query propagation framework, we introduce two novel sets of queries: (1) local queries that capture initial object-specific spatial features from each frame and (2) global queries containing past spatio-temporal representations. We propose the L2G-aligner, a novel lightweight transformer decoder, to facilitate an early alignment between local and global queries. This alignment allows our model to effectively utilize current-frame information while maintaining temporal consistency, producing smooth transitions between frames. Furthermore, the L2G-aligner is integrated within the segmentation model without relying on additional complex heuristics or memory mechanisms. Extensive experiments across various challenging VIS and VPS datasets showcase the superiority of our method with simple online training, surpassing current benchmarks without bells and whistles. For instance, we achieve 54.3 and 49.4 AP on the Youtube-VIS-19/-21 datasets, respectively, and 37.0 AP on the OVIS dataset with a ResNet-50 backbone.
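The core idea of aligning the two query sets can be illustrated with a minimal sketch. The actual L2G-aligner is a learned lightweight transformer decoder (with projections, feed-forward layers, and normalization); the toy `l2g_align` function below is a hypothetical simplification that only shows the alignment direction, i.e., global queries (carrying past spatio-temporal context) cross-attending to local queries (current-frame features) and updating via a residual connection:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def l2g_align(global_q, local_q):
    """One scaled dot-product cross-attention step (illustrative only).

    global_q: (N, D) queries with past temporal context.
    local_q:  (N, D) object-specific queries from the current frame.
    Returns global queries aligned to the current frame.
    """
    d = global_q.shape[-1]
    # Global queries attend to current-frame local queries.
    attn = softmax(global_q @ local_q.T / np.sqrt(d))
    # Residual update keeps long-term context while absorbing new evidence.
    return global_q + attn @ local_q
```

In the full model, learned query/key/value projections replace the raw dot products, and the aligned queries are then fed to the main decoder; this sketch only conveys why early alignment yields a smooth handoff between past and current representations.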