🤖 AI Summary
Existing video understanding methods primarily focus on coarse-grained action recognition or generic object tracking, and do not jointly perform fine-grained, action-driven multi-object detection, tracking, and temporal localization. To address this gap, we propose a novel task—Spatio-Temporal Video Action Grounding (SVAG)—which requires models to simultaneously detect, track, and precisely localize all objects executing a given natural-language-described action, along with the corresponding temporal intervals. We introduce SVAG-Bench, the first large-scale benchmark for this task, comprising nearly 20,000 annotated samples across diverse real-world scenarios. We further design SVAGFormer, a vision-language framework for joint spatial and temporal grounding, and release SVAGEval, a standardized evaluation toolkit. Extensive experiments show that state-of-the-art models degrade significantly on SVAG—especially in complex, densely populated scenes—confirming the task's inherent difficulty and highlighting the need for fine-grained spatio-temporal and semantic reasoning.
📝 Abstract
Understanding fine-grained actions and accurately localizing their corresponding actors in space and time are fundamental capabilities for advancing next-generation AI systems, including embodied agents, autonomous platforms, and human-AI interaction frameworks. Despite recent progress in video understanding, existing methods predominantly address either coarse-grained action recognition or generic object tracking, thereby overlooking the challenge of jointly detecting and tracking multiple objects according to their actions while grounding them temporally. To address this gap, we introduce Spatio-Temporal Video Action Grounding (SVAG), a novel task that requires models to simultaneously detect, track, and temporally localize all referent objects in videos based on natural language descriptions of their actions. To support this task, we construct SVAG-Bench, a large-scale benchmark comprising 688 videos, 19,590 annotated records, and 903 unique verbs, covering a diverse range of objects, actions, and real-world scenes. We further propose SVAGFormer, a baseline framework that adapts state-of-the-art vision-language models for joint spatial and temporal grounding, and introduce SVAGEval, a standardized evaluation toolkit for fair and reproducible benchmarking. Empirical results show that existing models perform poorly on SVAG, particularly in dense or complex scenes, underscoring the need for more advanced reasoning over fine-grained object-action interactions in long videos.