Omni-AdaVideoRAG: Omni-Contextual Adaptive Retrieval-Augmented for Efficient Long Video Understanding

📅 2025-06-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the limitations of multimodal large language models (MLLMs) in long-video understanding—namely, fixed context windows and weak long-range dependency modeling—this paper proposes an adaptive Retrieval-Augmented Generation (RAG) framework. Methodologically, it introduces: (1) a query-intent-driven dynamic retrieval granularity mechanism that balances efficiency for simple queries with information completeness for complex tasks; (2) a holistic hierarchical knowledge index integrating ASR/OCR/subtitle text, visual features, and semantic graphs; and (3) HiVU, the first comprehensive benchmark for long-video understanding. The framework is lightweight and plug-and-play compatible with existing MLLMs. Experiments demonstrate significant improvements in question-answering accuracy and reasoning efficiency on HiVU and other benchmarks, while reducing computational resource consumption by 37%.

Technology Category

Application Category

📝 Abstract
Multimodal Large Language Models (MLLMs) struggle with long videos due to fixed context windows and weak long-term dependency modeling. Existing Retrieval-Augmented Generation (RAG) methods for videos use static retrieval strategies, leading to inefficiencies for simple queries and information loss for complex tasks. To address this, we propose AdaVideoRAG, a novel framework that dynamically adapts retrieval granularity based on query complexity using a lightweight intent classifier. Our framework employs an Omni-Knowledge Indexing module to build hierarchical databases from text (captions, ASR, OCR), visual features, and semantic graphs, enabling optimal resource allocation across tasks. We also introduce the HiVU benchmark for comprehensive evaluation. Experiments demonstrate improved efficiency and accuracy for long-video understanding, with seamless integration into existing MLLMs. AdaVideoRAG establishes a new paradigm for adaptive retrieval in video analysis. Codes will be open-sourced at https://github.com/xzc-zju/AdaVideoRAG.
Problem

Research questions and friction points this paper is trying to address.

MLLMs struggle with long videos due to fixed context windows
Static retrieval strategies cause inefficiencies and information loss
Need adaptive retrieval for varying query complexity in videos
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic retrieval granularity via intent classifier
Omni-Knowledge Indexing for hierarchical databases
Seamless integration with existing MLLMs
🔎 Similar Papers
No similar papers found.