🤖 AI Summary
Existing video multimodal large language models (MLLMs) struggle to jointly model fine-grained spatiotemporal localization and achieve cross-modal coordinate alignment. To address this, we propose the first MLLM explicitly designed for fine-grained spatiotemporal understanding. Our method introduces three core innovations: (1) a language-aligned positional embedding mechanism that bridges textual and geometric coordinate spaces; (2) a point-to-region dual-stream attention architecture with spatiotemporal-decoupled feature compression; and (3) a progressive coarse-to-fine alignment training paradigm leveraging the ST-Align dataset (4.3M samples). We further establish the ST-Align benchmark framework, covering spatiotemporal localization, event description, and spatial grounding tasks. Extensive evaluation across 11 fine-grained spatiotemporal understanding benchmarks demonstrates state-of-the-art performance, significantly advancing language-vision spatiotemporal joint reasoning capabilities.
📝 Abstract
Recent advancements in multimodal large language models (MLLMs) have shown promising results, yet existing approaches struggle to effectively handle temporal and spatial localization simultaneously. This challenge stems from two key issues: first, incorporating spatial-temporal localization introduces a vast number of coordinate combinations, complicating the alignment of linguistic and visual coordinate representations; second, encoding fine-grained temporal and spatial information during video feature compression is inherently difficult. To address these issues, we propose LLaVA-ST, an MLLM for fine-grained spatial-temporal multimodal understanding. In LLaVA-ST, we propose Language-Aligned Positional Embedding, which embeds textual coordinate special tokens into the visual space, simplifying the alignment of fine-grained spatial-temporal correspondences. Additionally, we design the Spatial-Temporal Packer, which decouples the feature compression of temporal and spatial resolutions into two distinct point-to-region attention processing streams. Furthermore, we propose the ST-Align dataset, with 4.3M training samples for fine-grained spatial-temporal multimodal understanding. With ST-Align, we present a progressive training pipeline that aligns visual and textual features through sequential coarse-to-fine stages. Additionally, we introduce the ST-Align benchmark to evaluate spatial-temporal interleaved fine-grained understanding tasks, which include Spatial-Temporal Video Grounding (STVG), Event Localization and Captioning (ELC), and Spatial Video Grounding (SVG). LLaVA-ST achieves outstanding performance on 11 benchmarks requiring fine-grained temporal, spatial, or spatial-temporal interleaved multimodal understanding. Our code, data, and benchmark will be released at https://github.com/appletea233/LLaVA-ST.
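The abstract only names the Language-Aligned Positional Embedding mechanism; it does not specify how textual coordinate tokens are mapped into the visual space. As a toy illustration of one plausible reading — each coordinate special token `<k>` takes its embedding from an interpolation of the visual positional embeddings at the corresponding normalized position — the sketch below uses a 1-D axis for simplicity. All shapes, the bin count, and the interpolation scheme are our own assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical visual positional embeddings for a 1-D axis of W patch
# positions (a real model would use a 2-D spatial grid plus a time axis).
W, D = 24, 32
visual_pos = rng.standard_normal((W, D))

def coord_token_embedding(k, num_bins=100):
    """Embedding for the textual coordinate special token <k> (k in 0..99),
    obtained by linearly interpolating the *visual* positional embeddings
    at the normalized coordinate k / num_bins. Tying the text-side token
    to the visual coordinate space is the alignment idea being sketched."""
    x = k / num_bins * (W - 1)                 # map token index into patch space
    lo = int(np.floor(x))
    hi = min(lo + 1, W - 1)
    frac = x - lo
    return (1 - frac) * visual_pos[lo] + frac * visual_pos[hi]

# <0> lands exactly on the first visual position; other tokens fall
# between grid cells and blend their neighbors' embeddings.
assert np.allclose(coord_token_embedding(0), visual_pos[0])
```

Under this reading, language and vision share one coordinate representation from the start, so the model need not learn an arbitrary mapping between textual number tokens and visual positions.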
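Likewise, the Spatial-Temporal Packer is described only as decoupling compression into two point-to-region attention streams. A minimal numpy sketch of what that might mean follows: each compressed output "point" attends only over its own contiguous region of input tokens, and the two streams apply this along the spatial and temporal axes independently. The shapes, region sizes, and projections here are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def point_to_region_compress(feats, out_len, wq, wk):
    """Compress feats (L, D) to (out_len, D): each output 'point'
    attends only within its own contiguous region of the input."""
    L, D = feats.shape
    r = L // out_len                               # region size per point
    out = np.empty((out_len, D))
    for i in range(out_len):
        region = feats[i * r:(i + 1) * r]          # (r, D) local region
        q = region.mean(axis=0) @ wq               # point query from region
        k = region @ wk                            # region keys
        attn = softmax(q @ k.T / np.sqrt(D))       # attention over region only
        out[i] = attn @ region                     # weighted aggregation
    return out

# Toy video features: T frames, H*W patches per frame, D channels.
T, HW, D = 8, 16, 32
video = rng.standard_normal((T, HW, D))
wq, wk = rng.standard_normal((D, D)), rng.standard_normal((D, D))

# Spatial stream: compress each frame's spatial tokens (HW -> 4),
# keeping full temporal resolution.
spatial = np.stack(
    [point_to_region_compress(video[t], 4, wq, wk) for t in range(T)])

# Temporal stream: compress each spatial location's time axis (T -> 2),
# keeping full spatial resolution.
temporal = np.stack(
    [point_to_region_compress(video[:, p], 2, wq, wk) for p in range(HW)],
    axis=1)

print(spatial.shape, temporal.shape)   # → (8, 4, 32) (2, 16, 32)
```

Decoupling the two axes lets one stream preserve fine temporal detail while the other preserves fine spatial detail, rather than forcing a single compression to sacrifice both.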