🤖 AI Summary
This work addresses a key limitation of existing video large language models: they struggle to jointly model spatial and temporal information, which hinders fine-grained spatiotemporal grounding. To overcome this, we propose VideoLoom, the first unified video large language model framework capable of fine-grained spatiotemporal localization, and introduce LoomData-8.7k, the first large-scale, human-centric dataset with precise spatiotemporal annotations, enabling end-to-end training. We also design LoomBench, a comprehensive benchmark for evaluating spatiotemporal understanding. Experiments show that VideoLoom achieves state-of-the-art or highly competitive performance across multiple tasks, including 63.1 J&F on ReVOS and 48.3 R1@0.7 on Charades-STA.
📝 Abstract
This paper presents VideoLoom, a unified Video Large Language Model (Video LLM) for joint spatial-temporal understanding. To develop fine-grained spatial and temporal localization capabilities, we curate LoomData-8.7k, a human-centric video dataset with temporally grounded and spatially localized captions. Trained on this data, VideoLoom achieves state-of-the-art or highly competitive performance across a variety of spatial and temporal benchmarks (e.g., 63.1 J&F on ReVOS for referring video object segmentation and 48.3 R1@0.7 on Charades-STA for temporal grounding). In addition, we introduce LoomBench, a new benchmark of temporal, spatial, and compositional video-question pairs that enables comprehensive evaluation of Video LLMs from diverse aspects. Collectively, these contributions offer a universal and effective suite for joint spatial-temporal video understanding, setting a new standard in multimodal intelligence.