🤖 AI Summary
Current video large language models (VLLMs) typically rely on low-frame-rate sampling, discarding dense temporal information and struggling on tasks that demand precise temporal alignment (e.g., lecture comprehension). To close this gap, the paper introduces the task of Dense Video Understanding (DVU), along with DIVE, the first benchmark explicitly designed for high-frame-rate, fine-grained temporal reasoning, and Gated Residual Tokenization (GRT), a two-stage framework that makes DVU practical. GRT first applies pixel-level motion estimation to gate tokenization, skipping static regions so that token count and compute grow sub-linearly with frame rate; it then merges tokens across static regions within a semantic scene, further cutting redundancy while preserving dynamic content. Evaluated on DIVE, GRT outperforms larger VLLM baselines despite its smaller parameter count, and its performance consistently improves as frame rate increases, demonstrating the effectiveness, efficiency, and scalability of dense temporal modeling.
📝 Abstract
High temporal resolution is essential for capturing fine-grained details in video understanding. However, current video large language models (VLLMs) and benchmarks mostly rely on low-frame-rate sampling, such as uniform sampling or keyframe selection, discarding dense temporal information. This compromise avoids the high cost of tokenizing every frame, which otherwise leads to redundant computation and linear token growth as video length increases. While this trade-off works for slowly changing content, it fails for tasks like lecture comprehension, where information appears in nearly every frame and requires precise temporal alignment. To address this gap, we introduce Dense Video Understanding (DVU), which enables high-FPS video comprehension by reducing both tokenization time and token overhead. Existing benchmarks are also limited, as their QA pairs focus on coarse content changes. We therefore propose DIVE (Dense Information Video Evaluation), the first benchmark designed for dense temporal reasoning. To make DVU practical, we present Gated Residual Tokenization (GRT), a two-stage framework: (1) Motion-Compensated Inter-Gated Tokenization uses pixel-level motion estimation to skip static regions during tokenization, achieving sub-linear growth in token count and compute. (2) Semantic-Scene Intra-Tokenization Merging fuses tokens across static regions within a scene, further reducing redundancy while preserving dynamic semantics. Experiments on DIVE show that GRT outperforms larger VLLM baselines and scales positively with FPS. These results highlight the importance of dense temporal information and demonstrate that GRT enables efficient, scalable high-FPS video understanding.
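The gating idea in stage (1) can be illustrated with a small sketch: measure per-patch motion between consecutive frames and tokenize only patches whose change exceeds a threshold, reusing the previous tokens elsewhere. This is a toy stand-in, not the paper's implementation; the function name, the mean-absolute-difference motion proxy, and the fixed threshold are all illustrative assumptions (GRT uses learned tokenizers with pixel-level motion compensation).

```python
import numpy as np

def gated_token_counts(frames, patch=8, threshold=1.0):
    """Count how many patch tokens each frame contributes when static
    patches are gated out and their previous tokens are reused.

    Toy sketch: motion is approximated by mean absolute per-patch
    difference between consecutive frames (an assumption, not GRT's
    actual motion-compensated estimator).
    """
    gh = frames[0].shape[0] // patch
    gw = frames[0].shape[1] // patch

    def patches(f):
        # Crop to a whole number of patches and split into a (gh, patch, gw, patch) grid.
        return f[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch).astype(float)

    counts = [gh * gw]  # first frame: every patch is tokenized
    for prev, cur in zip(frames, frames[1:]):
        motion = np.abs(patches(cur) - patches(prev)).mean(axis=(1, 3))
        # Only patches that moved pass the gate and are re-tokenized.
        counts.append(int((motion > threshold).sum()))
    return counts

# On a mostly static clip, token count collapses after the first frame:
# a 32x32 clip has 16 patches, but unchanged frames add 0 new tokens,
# so total tokens grow sub-linearly in the number of frames.
clip = [np.zeros((32, 32)) for _ in range(4)]
clip[2] = clip[2].copy()
clip[2][0:8, 0:8] = 10.0  # one patch changes in frame 2
print(gated_token_counts(clip))  # → [16, 0, 1, 1]
```

Under this scheme, a high-FPS but slowly changing video pays the full tokenization cost only once per scene change, which is the intuition behind GRT's sub-linear token growth.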