🤖 AI Summary
This work addresses the challenge of spatio-temporal video grounding in long, untrimmed videos—a setting largely overlooked by existing methods, which are typically designed for short clips and suffer from high computational cost and interference from irrelevant frames due to full-sequence synchronous processing. To this end, we present ART-STVG, a systematic approach tailored for long-form video grounding, built upon an autoregressive Transformer architecture. ART-STVG processes video frames sequentially as a stream, leveraging spatio-temporal memory banks to model contextual information across time. It further incorporates a memory selection strategy and a cascaded spatio-temporal decoder to enable efficient and precise grounding. Evaluated on newly extended long-form datasets, our method significantly outperforms state-of-the-art approaches while maintaining competitive performance on conventional short-video benchmarks.
📝 Abstract
In real-world scenarios, videos can span several minutes or even hours. However, existing research on spatio-temporal video grounding (STVG), which localizes a target given a textual query, focuses mainly on short videos of tens of seconds, typically under one minute, limiting real-world applicability. In this paper, we explore Long-Form STVG (LF-STVG), which aims to locate targets in long-term videos. Compared with short videos, long-term videos contain much longer temporal spans and more irrelevant information, which challenges existing STVG methods that process all frames at once. To address this, we propose an AutoRegressive Transformer architecture for LF-STVG, termed ART-STVG. Unlike conventional STVG methods that require the entire video sequence to make predictions at once, ART-STVG treats the video as streaming input and processes frames sequentially, enabling efficient handling of long videos. To model spatio-temporal context, we design spatial and temporal memory banks and apply them in the decoders. Since memories from different moments are not always relevant to the current frame, we introduce simple yet effective memory selection strategies that provide more relevant information to the decoders, significantly improving performance. Furthermore, instead of localizing in space and time in parallel, we propose a cascaded spatio-temporal design that connects the spatial decoder to the temporal decoder, allowing fine-grained spatial cues to assist the more difficult temporal localization in long videos. Experiments on newly extended LF-STVG datasets show that ART-STVG significantly outperforms state-of-the-art methods, while achieving competitive performance on conventional short-form STVG.
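The streaming design described above can be illustrated with a toy sketch. This is not the paper's implementation; all function names, the cosine-similarity memory selection, and the mean-fusion "decoders" are simplifying assumptions made for illustration. It only shows the control flow: frames arrive one at a time, a memory bank grows autoregressively, the top-k most relevant memories are selected per frame, and a spatial cue is computed first and then fed to the temporal scoring step (the cascade).

```python
import numpy as np

def select_memories(query, memory, k=2):
    """Memory selection (assumed strategy): keep the k stored frame
    features most similar to the current frame by cosine similarity."""
    if len(memory) == 0:
        return np.empty((0, query.shape[0]))
    bank = np.stack(memory)
    sims = bank @ query / (np.linalg.norm(bank, axis=1)
                           * np.linalg.norm(query) + 1e-8)
    idx = np.argsort(sims)[::-1][:k]
    return bank[idx]

def process_stream(frames, k=2):
    """Autoregressive pass: one frame at a time; the memory bank
    grows as frames are consumed, so memory stays bounded per step."""
    memory, outputs = [], []
    for f in frames:
        mem = select_memories(f, memory, k)
        # Stand-in "spatial decoder": fuse frame with selected memories.
        spatial_cue = f if mem.size == 0 else (f + mem.mean(axis=0)) / 2
        # Cascaded "temporal decoder": scores are conditioned on the
        # spatial cue rather than computed in parallel with it.
        temporal_score = float(spatial_cue @ f
                               / (np.linalg.norm(f) ** 2 + 1e-8))
        outputs.append(temporal_score)
        memory.append(f)  # write the current frame into the bank
    return outputs
```

Because each step touches only the current frame plus k selected memories, per-frame cost does not grow with video length, which is the efficiency argument made in the abstract.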