🤖 AI Summary
Existing long-video understanding benchmarks (e.g., Video-MME, MLVU) rely on uniform frame sampling, which often discards semantically critical frames and severely compromises the evaluation accuracy of multimodal large language models (MLLMs). To address this, we propose RAG-Adapter, a plug-and-play framework introducing the first retrieval-augmented, evaluation-oriented frame sampling paradigm. It employs multimodal frame-question relevance modeling and a dynamic sampling strategy to enable question-aware, adaptive frame selection. Additionally, we design Grouped-supervised Contrastive Learning (GCL) to optimize sampling quality on our newly constructed MMAT dataset. Evaluated on Video-MME and other benchmarks, RAG-Adapter boosts GPT-4o's accuracy by 9.3%, substantially improving evaluation reliability. This work establishes a more precise, reproducible standard for assessing long-video understanding capabilities.
📝 Abstract
Multi-modal Large Language Models (MLLMs) capable of video understanding are advancing rapidly. To effectively assess their video comprehension capabilities, long video understanding benchmarks, such as Video-MME and MLVU, have been proposed. However, these benchmarks directly use uniform frame sampling for testing, which causes significant information loss and prevents the evaluations from accurately reflecting the true abilities of MLLMs. To address this, we propose RAG-Adapter, a plug-and-play framework that reduces information loss during testing by sampling the frames most relevant to the given question. Additionally, we introduce a Grouped-supervised Contrastive Learning (GCL) method to further enhance the sampling effectiveness of RAG-Adapter through fine-tuning on our constructed MMAT dataset. Finally, we test numerous baseline MLLMs on various video understanding benchmarks, finding that RAG-Adapter sampling consistently outperforms uniform sampling (e.g., the accuracy of GPT-4o increases by 9.3 percent on Video-MME), providing a more accurate testing method for long video benchmarks.
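The core idea of question-aware sampling can be illustrated with a minimal sketch: rank frames by their embedding similarity to the question and keep the top-k in temporal order. This is an assumption-laden simplification (plain cosine similarity over precomputed embeddings), not the paper's actual pipeline, which uses learned multimodal relevance modeling fine-tuned with GCL; the function name `rag_adapter_sample` is hypothetical.

```python
import numpy as np

def rag_adapter_sample(frame_embeddings, question_embedding, k=8):
    """Simplified question-aware frame selection (illustrative only).

    Ranks frames by cosine similarity between each frame embedding and
    the question embedding, then returns the indices of the top-k most
    relevant frames in temporal order. A stand-in for RAG-Adapter's
    retrieval-augmented sampling; real embeddings would come from a
    fine-tuned multimodal encoder.
    """
    f = np.asarray(frame_embeddings, dtype=float)   # (num_frames, dim)
    q = np.asarray(question_embedding, dtype=float)  # (dim,)
    # Cosine similarity of every frame to the question (epsilon avoids /0).
    sims = f @ q / (np.linalg.norm(f, axis=1) * np.linalg.norm(q) + 1e-9)
    # Keep the k most relevant frames, restored to temporal order.
    top = np.argsort(-sims)[:k]
    return sorted(top.tolist())

# Toy example: frames 1 and 3 are most aligned with the question vector.
frames = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]]
question = [0, 1, 0]
print(rag_adapter_sample(frames, question, k=2))  # -> [1, 3]
```

Uniform sampling, by contrast, would pick indices spaced evenly regardless of the question, which is exactly where the information loss described above comes from.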