🤖 AI Summary
To address coarse temporal localization and the loss of fine-grained visual cues in hour-long video question answering, this paper proposes the first multi-agent framework tailored for fine-grained reasoning over long videos. A central LLM orchestrates two specialized agents: a “localization agent” for precise temporal segment retrieval and a “vision agent” for detailed visual understanding, effectively decoupling spatiotemporal grounding from multimodal interpretation. To improve planning efficiency and interpretability, the authors introduce a step-constrained PPO-based reinforcement learning mechanism. The method integrates multimodal large language models, joint vision-language reasoning, accurate temporal localization, and fine-grained visual description generation. Evaluated on the newly constructed long-video QA benchmarks LongTVQA and LongTVQA+, the approach significantly outperforms strong non-agent baselines, achieving simultaneous gains in both answer accuracy and inference efficiency.
📝 Abstract
Recent advances in multimodal LLMs and tool-using systems for long-video QA point to the promise of reasoning over hour-long episodes. However, many methods still compress content into lossy summaries or rely on limited toolsets, weakening temporal grounding and missing fine-grained cues. We propose a multi-agent framework in which a master LLM coordinates a grounding agent to localize question-relevant segments and a vision agent to extract targeted textual observations. The master agent plans under a step limit and is trained with reinforcement learning to encourage concise, correct, and efficient multi-agent cooperation. This design helps the master agent focus on relevant clips via grounding, complements subtitles with visual detail, and yields interpretable trajectories. On our proposed LongTVQA and LongTVQA+, episode-level datasets aggregated from TVQA/TVQA+, our multi-agent system significantly outperforms strong non-agent baselines. Experiments also show that reinforcement learning further strengthens the trained agent's reasoning and planning. Code and data will be shared at https://longvideoagent.github.io/.
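The master-agent control flow described above (a step-limited loop that alternates between grounding and visual observation before answering) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: all class and method names (`GroundingAgent`, `VisionAgent`, `MasterAgent`, `localize`, `describe`) are hypothetical, and the two specialized agents are replaced with trivial stubs where a real system would call an LLM and a vision-language model.

```python
# Hypothetical sketch of the master-agent loop; names and stubs are
# illustrative assumptions, not the paper's actual API.
from dataclasses import dataclass, field


@dataclass
class GroundingAgent:
    """Stub: returns a (start, end) segment judged relevant to the question."""

    def localize(self, question: str) -> tuple[int, int]:
        return (120, 180)  # placeholder segment, in seconds


@dataclass
class VisionAgent:
    """Stub: returns a textual observation for a localized segment."""

    def describe(self, segment: tuple[int, int]) -> str:
        return f"observation for segment {segment[0]}-{segment[1]}s"


@dataclass
class MasterAgent:
    grounding: GroundingAgent
    vision: VisionAgent
    max_steps: int = 4  # the step constraint enforced during planning
    trajectory: list = field(default_factory=list)  # interpretable trace

    def answer(self, question: str) -> str:
        # Alternate grounding and observation until evidence suffices
        # or the step budget is exhausted.
        for _ in range(self.max_steps):
            segment = self.grounding.localize(question)
            self.trajectory.append(("ground", segment))
            obs = self.vision.describe(segment)
            self.trajectory.append(("observe", obs))
            if self._enough_evidence():
                break
        return f"answer based on {len(self.trajectory)} trajectory entries"

    def _enough_evidence(self) -> bool:
        # Stub stopping rule; a trained master LLM would decide when to stop.
        return len(self.trajectory) >= 2


agent = MasterAgent(GroundingAgent(), VisionAgent())
print(agent.answer("What did the character pick up?"))
```

In this sketch the step limit caps the number of tool calls, and the recorded `trajectory` is what would make the agent's reasoning inspectable; in the paper, PPO-style reinforcement learning would shape when the master agent stops and which agent it invokes at each step.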