Enhancing Long Video Question Answering with Scene-Localized Frame Grouping

📅 2025-08-04
📈 Citations: 0 (influential citations: 0)
🤖 AI Summary
Current multimodal large language models (MLLMs) struggle with long-video understanding: computational constraints limit frame coverage and cause critical scene information to be lost. Moreover, prevailing benchmarks emphasize localizing sparse target frames, which diverges from the fine-grained comprehension required in real-world use. To address this, we propose SceneQA, a scene-based video question answering task, and introduce the Scene-Localized Frame Grouping (SLFG) paradigm: it localizes semantic scenes and dynamically reorganizes discrete frames into coherent scene units without modifying the model architecture, enabling plug-and-play deployment. To support SceneQA, we construct the LVSQA benchmark, designed to better reflect practical application scenarios. Experiments demonstrate that SLFG consistently improves MLLM performance across multiple long-video QA benchmarks, validating its effectiveness in scene-aware reasoning, temporal coherence modeling, and cross-dataset generalization.

📝 Abstract
Current Multimodal Large Language Models (MLLMs) often perform poorly in long video understanding, primarily because resource limitations prevent them from processing all video frames and their associated information, making efficient extraction of relevant information a challenging task. Existing frameworks and evaluation tasks focus on identifying the few frames containing core objects among a large number of irrelevant frames, which does not align with the practical needs of real-world applications. To address this issue, we propose a new scenario under the video question answering task, SceneQA, which emphasizes scene-based detail perception and reasoning abilities. We also develop the LVSQA dataset to support the SceneQA task; it is built upon carefully selected videos from LVBench and contains a new collection of question-answer pairs to enable a fairer evaluation of MLLMs' scene perception abilities in long videos. Inspired by human cognition, we introduce a novel method called SLFG, whose core idea is to combine individual frames into semantically coherent scene frames. By leveraging scene localization and dynamic frame reassembly mechanisms, SLFG significantly enhances the long-video understanding capabilities of existing MLLMs. SLFG requires no modification to the original model architecture and offers plug-and-play usability. Experimental results show that the method performs exceptionally well across several long-video benchmarks. Code and dataset will be released at http://www.slfg.pkuzwh.cn.
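The abstract describes the SLFG pipeline only at a high level: localize semantic scenes, then regroup discrete frames into coherent scene units. As a rough illustration of the grouping step, here is a minimal sketch, assuming CLIP-style frame embeddings and a simple similarity-drop boundary rule; the embedding source, the threshold, and the function names are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def group_frames_into_scenes(frame_embeddings: np.ndarray,
                             boundary_threshold: float = 0.8) -> list[list[int]]:
    """Group frame indices into contiguous scene units.

    A scene boundary is assumed wherever the cosine similarity between
    consecutive frame embeddings drops below `boundary_threshold`; this
    is an illustrative stand-in for the paper's scene localization step.
    """
    # Normalize embeddings so dot products are cosine similarities.
    normed = frame_embeddings / np.linalg.norm(frame_embeddings, axis=1, keepdims=True)
    scenes, current = [], [0]
    for i in range(1, len(normed)):
        if float(normed[i - 1] @ normed[i]) < boundary_threshold:  # semantic shift
            scenes.append(current)
            current = []
        current.append(i)
    scenes.append(current)
    return scenes

# Toy clip: three "scenes", each 100 noisy copies of one base embedding.
rng = np.random.default_rng(0)
bases = rng.standard_normal((3, 512))
clip = np.vstack([b + 0.05 * rng.standard_normal((100, 512)) for b in bases])
print([len(s) for s in group_frames_into_scenes(clip)])  # -> [100, 100, 100]
```

A threshold on consecutive-frame similarity is the simplest possible boundary detector; anything that yields contiguous scene units would slot into the same interface.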
Problem

Research questions and friction points this paper is trying to address.

Improving long video understanding in MLLMs
Addressing inefficient frame processing in videos
Enhancing scene perception for video QA tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scene-localized frame grouping for video understanding
Dynamic frame reassembly enhances MLLM capabilities
Plug-and-play usability without model modification (see the sketch below)
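Because these bullets stress plug-and-play deployment, the dynamic frame reassembly step can be pictured as a thin wrapper in front of any unmodified MLLM. A minimal sketch under that reading; `slfg_style_answer`, `mllm_answer`, and the evenly spaced per-scene frame budget are hypothetical stand-ins, not the authors' actual components.

```python
from typing import Callable, Sequence
import numpy as np

def slfg_style_answer(frames: Sequence[np.ndarray],
                      scenes: list[list[int]],
                      question: str,
                      mllm_answer: Callable[[list[np.ndarray], str], str],
                      frames_per_scene: int = 4) -> str:
    """Reassemble frames scene by scene, then call an unmodified MLLM.

    `mllm_answer` stands in for any existing model's inference function;
    the model itself is untouched, which is what plug-and-play means here.
    """
    regrouped: list[np.ndarray] = []
    for scene in scenes:
        # Evenly spaced picks keep every scene represented within the
        # model's frame budget (an illustrative policy, not the paper's).
        picks = np.linspace(0, len(scene) - 1, min(frames_per_scene, len(scene)))
        regrouped.extend(frames[scene[int(i)]] for i in picks)
    return mllm_answer(regrouped, question)

# Usage with a dummy "model" that just reports what it received.
dummy_mllm = lambda fs, q: f"answered '{q}' from {len(fs)} scene-grouped frames"
video = [np.zeros((224, 224, 3), dtype=np.uint8) for _ in range(300)]
scene_units = [list(range(0, 100)), list(range(100, 300))]
print(slfg_style_answer(video, scene_units, "What happens in the kitchen?", dummy_mllm))
```

The wrapper only changes which frames the model sees, never the model's weights or architecture, so it could sit in front of any frame-based MLLM inference call.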
Authors
Xuyi Yang
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen, China
Wenhao Zhang
School of Electronic and Computer Engineering, Peking University, Shenzhen, China
Hongbo Jin
Peking University (LLM, video LLM, 3D LLM)
Lin Liu
School of Computer Science, Wuhan University, Wuhan, China
Hongbo Xu
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen, China
Yongwei Nie
South China University of Technology (Computer Graphics, Computer Vision)
Fei Yu
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen, China
Fei Ma
Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen, China