🤖 AI Summary
Video Question Answering (VideoQA) faces two key challenges: difficulty localizing sparse critical events in long videos and weak causal-temporal reasoning. To address these, we propose a causality-aware fine-grained video understanding framework. First, we leverage large language models to perform causal disambiguation and temporal focus enhancement on question-option pairs, enabling causally informed query reconstruction. Second, we introduce a temporal grounding module coupled with an adaptive evidence fusion mechanism to precisely localize critical frames and model cross-temporal causal dependencies. Finally, we employ dynamic vision-text fusion and a multimodal large language model for answer generation. Our method achieves state-of-the-art performance on NExT-QA, IntentQA, and NExT-GQA, significantly improving accuracy on complex causal-temporal reasoning while maintaining computational efficiency. The core contribution is embedding causal reasoning explicitly in both query reconstruction and temporal grounding, enabling, for the first time, an end-to-end causally driven VideoQA pipeline.
📝 Abstract
Video Question Answering (VideoQA) requires identifying sparse critical moments in long videos and reasoning about their causal relationships to answer semantically complex questions. While recent advances in multimodal learning have improved alignment and fusion, current approaches remain limited by two prevalent but fundamentally flawed strategies: (1) task-agnostic sampling indiscriminately processes all frames, overwhelming key events with irrelevant content; and (2) heuristic retrieval captures superficial patterns but misses the causal-temporal structures needed for complex reasoning. To address these challenges, we introduce LeAdQA, an approach that bridges these gaps by coupling causal-aware query refinement with fine-grained visual grounding. Our method first leverages LLMs to reformulate question-option pairs, resolving causal ambiguities and sharpening temporal focus. These refined queries then direct a temporal grounding model to precisely retrieve the most salient segments, complemented by an adaptive fusion mechanism that dynamically integrates the evidence to maximize relevance. The integrated visual-textual cues are then processed by an MLLM to generate accurate, contextually grounded answers. Experiments on NExT-QA, IntentQA, and NExT-GQA demonstrate that our method's precise visual grounding substantially enhances the understanding of video-question relationships, achieving state-of-the-art (SOTA) performance on complex reasoning tasks while maintaining computational efficiency.
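The three-stage pipeline described above (query refinement → temporal grounding with adaptive fusion → MLLM answering) can be sketched as follows. This is a minimal illustrative skeleton, not the paper's implementation: all function names are hypothetical and their bodies are placeholders standing in for the LLM, grounding model, and MLLM components.

```python
# Illustrative sketch of a LeAdQA-style pipeline. Every stage here is a stub:
# in the actual method, an LLM refines the query, a temporal grounding model
# localizes segments, and an MLLM produces the final answer.
from dataclasses import dataclass


@dataclass
class Segment:
    start: float  # segment start, in seconds
    end: float    # segment end, in seconds
    score: float  # grounding confidence for this segment


def refine_query(question: str, options: list[str]) -> str:
    """Stage 1 (stub): an LLM would rewrite the question-option pair to
    resolve causal ambiguity and sharpen temporal focus."""
    return f"{question} (options: {'; '.join(options)})"


def ground_segments(refined_query: str, video_len: float) -> list[Segment]:
    """Stage 2 (stub): a temporal grounding model would localize the
    video segments most relevant to the refined query."""
    return [Segment(0.0, 0.3 * video_len, 0.9),
            Segment(0.6 * video_len, 0.8 * video_len, 0.4)]


def fuse_evidence(segments: list[Segment], top_k: int = 1) -> list[Segment]:
    """Adaptive fusion (stub): keep only the highest-confidence evidence;
    the paper instead fuses overlapping intervals weighted by relevance."""
    return sorted(segments, key=lambda s: s.score, reverse=True)[:top_k]


def answer(refined_query: str, evidence: list[Segment],
           options: list[str]) -> str:
    """Stage 3 (stub): an MLLM would consume the grounded frames plus the
    refined query and select one of the candidate options."""
    return options[0]  # placeholder choice


question = "Why did the child run toward the door?"
options = ["heard a knock", "was playing tag"]
q = refine_query(question, options)
evidence = fuse_evidence(ground_segments(q, video_len=120.0))
print(answer(q, evidence, options))
```

The key design point the abstract emphasizes is that grounding is driven by the *refined* query rather than the raw question, so causal cues injected in stage 1 propagate into which frames stage 2 retrieves.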