LeAdQA: LLM-Driven Context-Aware Temporal Grounding for Video Question Answering

📅 2025-07-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Video Question Answering (VideoQA) faces two key challenges: localizing sparse critical events in long videos and performing robust causal-temporal reasoning. To address these, we propose a causality-aware, fine-grained video understanding framework. First, we leverage large language models to perform causal disambiguation and temporal-focus enhancement on question-option pairs, enabling causality-informed query reconstruction. Second, we introduce a temporal grounding module coupled with an adaptive evidence fusion mechanism to precisely localize critical frames and model cross-temporal causal dependencies. Finally, we employ dynamic vision-text fusion and a multimodal large language model for answer generation. Our method achieves state-of-the-art performance on NExT-QA, IntentQA, and NExT-GQA, significantly improving accuracy on complex causal-temporal reasoning while maintaining computational efficiency. The core contribution lies in explicitly embedding causal reasoning throughout both query reconstruction and temporal grounding, enabling, for the first time, an end-to-end causally driven VideoQA pipeline.
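The three-stage pipeline described in the summary can be sketched roughly as follows. Every function name, signature, and data structure here is a hypothetical illustration of the control flow (query refinement, grounding, fusion, answering), not the authors' actual implementation; the model calls are stubbed out.

```python
# Hypothetical sketch of the LeAdQA three-stage pipeline.
# All names and return values are illustrative assumptions.

def refine_query(question: str, options: list[str]) -> str:
    """Stage 1: an LLM would rewrite the question-option pair to resolve
    causal ambiguity and sharpen temporal focus. Stubbed here."""
    return f"{question} (focus: causal/temporal cues) | options: {'; '.join(options)}"

def ground_segments(refined_query: str, num_frames: int) -> list[tuple[int, int, float]]:
    """Stage 2: a temporal grounding model would score video segments
    against the refined query; returns (start, end, relevance) triples."""
    # Stub: pretend the middle third of the video is most relevant.
    third = num_frames // 3
    return [(third, 2 * third, 0.9), (0, third, 0.3)]

def fuse_evidence(segments: list[tuple[int, int, float]], top_k: int = 1):
    """Simplified stand-in for adaptive fusion: keep the top-k segments
    by relevance score."""
    return sorted(segments, key=lambda s: s[2], reverse=True)[:top_k]

def answer(question: str, options: list[str], num_frames: int = 90) -> dict:
    """Stage 3: an MLLM would consume the fused visual evidence plus the
    refined query. Stubbed as returning the evidence and the first option."""
    query = refine_query(question, options)
    evidence = fuse_evidence(ground_segments(query, num_frames))
    return {"query": query, "evidence": evidence, "answer": options[0]}

result = answer("Why did the boy fall?", ["He tripped", "He jumped"])
print(result["evidence"])  # → [(30, 60, 0.9)]
```

The point of the sketch is the ordering: textual refinement happens before any visual retrieval, so the grounding model searches with a causally disambiguated query rather than the raw question.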

📝 Abstract
Video Question Answering (VideoQA) requires identifying sparse critical moments in long videos and reasoning about their causal relationships to answer semantically complex questions. While recent advances in multimodal learning have improved alignment and fusion, current approaches remain limited by two prevalent but fundamentally flawed strategies: (1) task-agnostic sampling indiscriminately processes all frames, overwhelming key events with irrelevant content; and (2) heuristic retrieval captures superficial patterns but misses the causal-temporal structures needed for complex reasoning. To address these challenges, we introduce LeAdQA, an innovative approach that bridges these gaps by synergizing causal-aware query refinement with fine-grained visual grounding. Our method first leverages LLMs to reformulate question-option pairs, resolving causal ambiguities and sharpening temporal focus. These refined queries subsequently direct a temporal grounding model to precisely retrieve the most salient segments, complemented by an adaptive fusion mechanism that dynamically integrates the evidence to maximize relevance. The integrated visual-textual cues are then processed by an MLLM to generate accurate, contextually grounded answers. Experiments on NExT-QA, IntentQA, and NExT-GQA demonstrate that our method's precise visual grounding substantially enhances the understanding of video-question relationships, achieving state-of-the-art (SOTA) performance on complex reasoning tasks while maintaining computational efficiency.
Problem

Research questions and friction points this paper is trying to address.

Identifying sparse critical moments in long videos
Resolving causal ambiguities in complex questions
Improving visual grounding for accurate VideoQA
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-driven causal-aware query refinement
Fine-grained visual temporal grounding
Adaptive multimodal fusion mechanism
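One plausible reading of the "adaptive multimodal fusion mechanism" listed above is relevance-weighted pooling of segment features. The sketch below uses a softmax over relevance scores; the function name, the weighting scheme, and the temperature parameter are assumptions for illustration, not the paper's actual mechanism.

```python
import math

def adaptive_fuse(segment_features: list[list[float]],
                  relevance_scores: list[float],
                  temperature: float = 1.0):
    """Softmax-weight segment features by relevance, then sum them
    into a single fused feature vector. Purely illustrative."""
    exps = [math.exp(s / temperature) for s in relevance_scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(segment_features[0])
    fused = [sum(w * f[d] for w, f in zip(weights, segment_features))
             for d in range(dim)]
    return fused, weights

# Two toy 2-D segment features; the first segment is far more relevant.
feats = [[1.0, 0.0], [0.0, 1.0]]
fused, w = adaptive_fuse(feats, [2.0, 0.0])
# The higher-relevance segment dominates the fused feature.
```

A lower temperature sharpens the weighting toward the single most relevant segment, while a higher one approaches uniform averaging, which is the kind of relevance/coverage trade-off an adaptive mechanism would tune.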
👥 Authors
Xinxin Dong (National University of Defense Technology)
Baoyun Peng (Academy of Military Science)
Haokai Ma (Postdoctoral Research Fellow, National University of Singapore)
Yufei Wang (National University of Defense Technology)
Zixuan Dong (New York University)
Fei Hu (National University of Defense Technology)
Xiaodong Wang (National University of Defense Technology)