🤖 AI Summary
Existing multimodal large language models are constrained by limited context length, making it difficult to accurately localize the sparse yet critical query-relevant segments within long videos. To address this, the authors propose an approach that integrates query-segment relevance with temporal dependencies among segments, constructing a visual-temporal affinity graph and introducing an iterative hypothesis-verification-refinement mechanism for global clue localization. The method is the first to jointly leverage external query signals and the video's intrinsic structural cues, thereby overcoming the limitations of conventional local grounding strategies. Experiments demonstrate substantial gains across multiple established benchmarks, with accuracy improvements of up to 7.5% on VideoMME-long.
📝 Abstract
Long video understanding remains challenging for multimodal large language models (MLLMs) due to limited context windows, which necessitate identifying sparse query-relevant video segments. However, existing methods predominantly localize clues based solely on the query, overlooking the video's intrinsic structure and the varying relevance across segments. To address this, we propose VideoDetective, a framework that integrates query-to-segment relevance and inter-segment affinity for effective clue hunting in long-video question answering. Specifically, we divide a video into multiple segments and represent them as a visual-temporal affinity graph built from visual similarity and temporal proximity. We then run a Hypothesis-Verification-Refinement loop that estimates the relevance of observed segments to the query and propagates these scores to unseen segments, yielding a global relevance distribution that guides localization of the most critical segments for final answering under sparse observation. Experiments show our method consistently achieves substantial gains across a wide range of mainstream MLLMs on representative benchmarks, with accuracy improvements of up to 7.5% on VideoMME-long. Our code is available at https://videodetective.github.io/
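The two core ideas in the abstract — an affinity graph mixing visual similarity with temporal proximity, and propagating relevance from observed segments to unseen ones — can be sketched roughly as follows. This is a minimal illustration under assumed design choices (cosine similarity on segment features, a Gaussian temporal kernel, a mixing weight `alpha`, and iterative propagation with clamping of observed scores); the function names are hypothetical and this is not the paper's actual implementation.

```python
import numpy as np

def build_affinity_graph(features, timestamps, sigma_t=2.0, alpha=0.5):
    """Mix visual similarity and temporal proximity into one affinity matrix.
    (Sketch; the exact formulation in the paper may differ.)"""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    visual = f @ f.T                                      # cosine similarity
    dt = np.abs(timestamps[:, None] - timestamps[None, :])
    temporal = np.exp(-(dt ** 2) / (2 * sigma_t ** 2))    # Gaussian temporal kernel
    return alpha * visual + (1 - alpha) * temporal

def propagate_relevance(A, observed_scores, n_iters=10, lam=0.7):
    """Spread relevance from observed segments to unseen ones over the graph."""
    P = A / A.sum(axis=1, keepdims=True)                  # row-stochastic transition matrix
    n = len(A)
    s = np.zeros(n)
    mask = np.array([k in observed_scores for k in range(n)])
    for k, v in observed_scores.items():
        s[k] = v
    for _ in range(n_iters):
        s = lam * (P @ s)                                 # diffuse scores over the graph
        s[mask] = [observed_scores[k] for k in np.where(mask)[0]]  # clamp observed
    return s
```

Segments with no observed score then receive relevance in proportion to how strongly they connect (visually and temporally) to observed high-relevance segments, giving the global distribution used to pick which segments to inspect next.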