🤖 AI Summary
To address the dual challenges of limited context length in Large Multimodal Models (LMMs) and the high computational cost of dense frame sampling in long-video understanding, this paper proposes a query-aware adaptive frame selection framework. The method explicitly distinguishes between global queries, which require holistic semantic understanding, and localized queries, which demand fine-grained temporal localization. It handles the former with training-free uniform sampling and the latter with a lightweight relevance-based frame extraction mechanism, avoiding generic, computationally expensive search strategies. Crucially, the framework requires no model fine-tuning and dynamically routes each query based on its identified type, achieving high localization accuracy at low computational overhead. Evaluated on three mainstream long-video understanding benchmarks, the approach consistently outperforms existing state-of-the-art methods, even when scaling the input to 256 frames, demonstrating both efficiency and strong generalization across diverse query types and datasets.
📝 Abstract
The application of Large Multimodal Models (LMMs) to long-form video understanding is constrained by limited context lengths and the computationally prohibitive cost of processing dense video tokens. Consequently, recent research has focused on query-aware frame selection; however, such methods often incur significant computational overhead. This paper challenges the assumption that complex search mechanisms are universally necessary. We first identify and validate a query typology distinguishing between global queries and localized queries. We demonstrate that while uniform sampling is both effective and efficient for global queries, localized queries do necessitate query-aware selection for optimal performance. Building on this insight, we propose DIG, a training-free frame selection framework that adapts its strategy to the query type. Specifically, DIG employs efficient uniform sampling for global queries while activating a specialized pipeline to extract query-relevant frames for localized queries. Experiments on three long-form video understanding benchmarks demonstrate that DIG consistently outperforms existing baselines and robustly improves LMM performance, even when scaling the input frame count to 256.
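The abstract does not specify DIG's implementation details, but the routing logic it describes — uniform sampling for global queries, relevance-ranked selection for localized ones — can be illustrated with a minimal sketch. The function names, the boolean query-type flag, and the precomputed per-frame relevance scores below are all hypothetical stand-ins, not the paper's actual pipeline:

```python
from typing import List, Sequence

def uniform_sample(num_frames: int, budget: int) -> List[int]:
    """Pick `budget` frame indices spread evenly across the video."""
    if budget >= num_frames:
        return list(range(num_frames))
    step = num_frames / budget
    return [int(i * step) for i in range(budget)]

def relevance_sample(scores: Sequence[float], budget: int) -> List[int]:
    """Pick the `budget` frames with the highest query-relevance scores
    (e.g., from a lightweight query-frame similarity model), returned
    in temporal order."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:budget])

def select_frames(query_is_global: bool, num_frames: int,
                  scores: Sequence[float], budget: int) -> List[int]:
    """Route by query type: training-free uniform sampling for global
    queries, relevance-based extraction for localized queries."""
    if query_is_global:
        return uniform_sample(num_frames, budget)
    return relevance_sample(scores, budget)
```

The point of the dispatch is that the cheap uniform path is taken whenever it suffices, so the more expensive relevance scoring is only paid for queries that actually need temporal localization.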