🤖 AI Summary
To address the problem of language priors dominating video question answering (VQA) models at the expense of visual evidence, this paper proposes a multi-agent collaborative reasoning framework. It jointly models a temporally aligned localization agent and a QA agent, and introduces a reflective agent that critically evaluates and fuses the outputs of multiple reasoning paths, thereby tightly coupling answer prediction with visual grounding. Built on 2B- and 7B-parameter multimodal backbones, the framework incorporates video grounding, multi-step reasoning, and reflective aggregation modules. On NExT-GQA and DeVE-QA, it achieves 30.3% and 47.4% Acc@GQA respectively, outperforming all existing 7B-scale models and establishing new state-of-the-art results. It also significantly improves grounding fidelity and prediction interpretability.
📝 Abstract
Grounded Video Question Answering (Grounded VideoQA) requires aligning textual answers with explicit visual evidence. However, modern multimodal models often rely on linguistic priors and spurious correlations, resulting in poorly grounded predictions. In this work, we propose MUPA, a cooperative MUlti-Path Agentic approach that unifies video grounding, question answering, answer reflection, and aggregation to tackle Grounded VideoQA. MUPA runs three distinct reasoning paths that interleave grounding and QA agents in different orders, along with a dedicated reflection agent that judges and aggregates the multi-path results to produce consistent QA and grounding. This design markedly improves grounding fidelity without sacrificing answer accuracy. Despite using only 2B parameters, our method outperforms all 7B-scale competitors. When scaled to 7B parameters, MUPA establishes new state-of-the-art results, with Acc@GQA of 30.3% and 47.4% on NExT-GQA and DeVE-QA respectively, demonstrating MUPA's effectiveness towards trustworthy video-language understanding. Our code is available at https://github.com/longmalongma/MUPA.
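The reflect-and-aggregate step can be sketched in miniature. This is a hypothetical illustration only, not the paper's implementation: the real agents are multimodal LLMs, so stub results stand in for the three reasoning paths, and the reflection agent is approximated by a majority vote over answers followed by averaging the time segments of the agreeing paths. The function name `reflect_and_aggregate` and the aggregation rule are assumptions.

```python
# Toy sketch of multi-path aggregation for Grounded VideoQA.
# Each path yields (answer, (start_sec, end_sec)); the "reflection" step
# here is a simple vote + segment fusion, standing in for a learned agent.
from collections import Counter
from typing import List, Tuple

Result = Tuple[str, Tuple[float, float]]

def reflect_and_aggregate(paths: List[Result]) -> Result:
    """Pick the majority answer across paths, then fuse the temporal
    segments of the paths that agree with it by averaging endpoints."""
    best = Counter(a for a, _ in paths).most_common(1)[0][0]
    segs = [seg for a, seg in paths if a == best]
    start = sum(s[0] for s in segs) / len(segs)
    end = sum(s[1] for s in segs) / len(segs)
    return best, (start, end)

# Hypothetical outputs of the three paths (e.g. ground-then-answer,
# answer-then-ground, joint): two agree on the answer, one dissents.
paths = [("a cat", (2.0, 5.0)), ("a cat", (2.4, 5.2)), ("a dog", (7.0, 9.0))]
print(reflect_and_aggregate(paths))  # ('a cat', (2.2, 5.1))
```

The dissenting path is excluded from segment fusion, which is one way such a design keeps a spurious, poorly grounded prediction from dragging the localized evidence window off target.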