🤖 AI Summary
To address key bottlenecks in long-video understanding with multimodal large language models (MLLMs), namely weak cross-modal interaction, severe hallucination, and imbalanced multi-task difficulty, this paper proposes VITAL. First, it introduces a tool-augmented multimodal reasoning mechanism: visual tools densely sample salient frames on demand, and the model generates multimodal chain-of-thought (CoT) reasoning that strengthens vision–language alignment. Second, it constructs two large-scale multi-task video reasoning datasets, MTVR-CoT-72k and MTVR-RL-110k, to support supervised fine-tuning and reinforcement learning. Third, it proposes Difficulty-aware Group Relative Policy Optimization (DGRPO), a reinforcement learning algorithm that jointly optimizes question answering and temporal grounding via difficulty-aware reward shaping. Evaluated on 11 diverse video understanding benchmarks, VITAL achieves state-of-the-art performance, with particularly significant gains on long-video question answering and fine-grained temporal grounding.
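The summary does not specify the visual toolbox's interface. As a minimal sketch of the on-demand dense sampling idea, assuming a hypothetical tool call in which the model names a salient interval `[start_s, end_s]` and receives timestamps sampled at a denser rate than the initial uniform pass (`dense_sample` and its parameters are illustrative, not the paper's API):

```python
def dense_sample(duration_s, start_s, end_s, fps=2.0):
    """Return timestamps for re-sampling frames inside a salient interval.

    duration_s: total video length in seconds.
    start_s, end_s: interval the model asked to inspect more closely
    (clamped to the video bounds).
    fps: denser sampling rate for the second pass (assumed value).
    """
    start = max(0.0, start_s)
    end = min(duration_s, end_s)
    n = max(1, int((end - start) * fps))
    step = (end - start) / n
    # Inclusive endpoints so the interval boundaries are always sampled.
    return [round(start + i * step, 3) for i in range(n + 1)]
```

For example, a 120 s video with a salient interval of 30–40 s at 2 fps yields 21 timestamps from 30.0 to 40.0, far denser than a uniform pass over the full video.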
📝 Abstract
The video reasoning ability of multimodal large language models (MLLMs) is crucial for downstream tasks like video question answering and temporal grounding. While recent approaches have explored text-based chain-of-thought (CoT) reasoning for MLLMs, these methods often suffer from limited cross-modal interaction and increased hallucination, especially with longer videos or reasoning chains. To address these challenges, we propose Video Intelligence via Tool-Augmented Learning (VITAL), a novel end-to-end agentic video reasoning framework. With a visual toolbox, the model can densely sample new video frames on demand and generate multimodal CoT for precise long video reasoning. We observe that temporal grounding and question answering are mutually beneficial for video understanding tasks. Therefore, we construct two high-quality multi-task video reasoning datasets, MTVR-CoT-72k for supervised fine-tuning and MTVR-RL-110k for reinforcement learning. Moreover, we propose a Difficulty-aware Group Relative Policy Optimization algorithm (DGRPO) to mitigate difficulty imbalance in multi-task reinforcement learning. Extensive experiments on 11 challenging video understanding benchmarks demonstrate the advanced reasoning ability of VITAL, outperforming existing methods in video question answering and temporal grounding tasks, especially in long video scenarios. All code, data, and model weights will be made publicly available.
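The abstract does not give DGRPO's exact formulation. As a hedged sketch of the underlying idea, here is one plausible difficulty-aware variant of GRPO's group-relative advantage computation, assuming binary-ish rewards in [0, 1] and difficulty estimated from the group's mean reward (the `1 + difficulty` scaling is an illustrative assumption, not the paper's rule):

```python
import numpy as np

def dgrpo_advantages(rewards, eps=1e-6):
    """Group-relative advantages with a difficulty-aware weight.

    rewards: per-rollout scalar rewards for one prompt's sampled group.
    Standard GRPO normalizes rewards within the group; the hypothetical
    shaping below up-weights harder prompts (low mean reward) so that
    easy tasks do not dominate multi-task reinforcement learning.
    """
    r = np.asarray(rewards, dtype=float)
    mean, std = r.mean(), r.std()
    adv = (r - mean) / (std + eps)      # standard GRPO normalization
    difficulty = 1.0 - mean             # assumes rewards lie in [0, 1]
    return adv * (1.0 + difficulty)     # assumed difficulty-aware scaling

# A mostly-solved (easy) group vs. a rarely-solved (hard) group:
easy = dgrpo_advantages([1, 1, 1, 0])
hard = dgrpo_advantages([0, 0, 1, 0])
```

Under this sketch, the lone correct rollout in the hard group receives a larger advantage than the correct rollouts in the easy group, which is one way difficulty-aware shaping could rebalance learning signal across tasks of unequal difficulty.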