VideoTIR: Accurate Understanding for Long Videos with Efficient Tool-Integrated Reasoning

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses hallucination in multimodal large language models (MLLMs) for long-form video understanding, which often arises from an imbalance between textual and visual tokens. To mitigate this, the authors propose VideoTIR, a framework that uses reinforcement learning to dynamically invoke a multi-level toolkit (retrieving key segments, images, or regions) for precise and efficient comprehension. Key innovations include cold-starting via either Zero-RL or supervised fine-tuning (SFT), the Toolkit Action Grouped Policy Optimization (TAGPO) algorithm to improve tool-calling efficiency, and a sandbox-based trajectory synthesis mechanism for generating high-quality training data. Evaluated on three long-video question-answering benchmarks, VideoTIR significantly improves accuracy and reasoning efficiency while substantially reducing redundant tool invocations.
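The tool-integrated reasoning loop described above can be sketched minimally as follows. The tool names (`retrieve_segment`, `retrieve_image`, `retrieve_region`), the policy interface, and the toy policy are illustrative assumptions, not the paper's actual implementation:

```python
# Hedged sketch of a multi-level tool-calling loop for long-video QA.
# All names here are hypothetical; the paper's real interface may differ.
from typing import Callable

# Multi-level toolkit: each tool narrows the visual context further,
# from a video segment, down to a keyframe, down to an image region.
TOOLS: dict[str, Callable[[str], str]] = {
    "retrieve_segment": lambda q: f"<segment relevant to '{q}'>",
    "retrieve_image":   lambda q: f"<keyframe relevant to '{q}'>",
    "retrieve_region":  lambda q: f"<image region relevant to '{q}'>",
}

def answer_question(policy, question: str, max_steps: int = 8) -> str:
    """Run the reasoning loop: at each step the policy either calls a
    tool (retrieved evidence is fed back into the context) or answers."""
    context: list[str] = [question]
    for _ in range(max_steps):
        action, arg = policy(context)        # e.g. ("retrieve_segment", query)
        if action == "answer":
            return arg
        context.append(TOOLS[action](arg))   # append retrieved evidence
    return "<no answer within budget>"

# Toy policy: retrieve one segment, then answer from the gathered evidence.
def toy_policy(context: list[str]):
    if len(context) == 1:
        return ("retrieve_segment", context[0])
    return ("answer", f"answer based on {len(context) - 1} retrieved item(s)")
```

RL training (rather than SFT alone) lets the policy discover when such a call is worth its cost, instead of imitating fixed tool-calling trajectories.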

📝 Abstract
Existing Multimodal Large Language Models (MLLMs) often suffer from hallucinations in long video understanding (LVU), primarily due to the imbalance between textual and visual tokens. Observing that MLLMs handle short visual inputs well, recent LVU works alleviate hallucinations by automatically parsing the vast visual data into manageable segments that MLLMs can process effectively. SFT-based tool-calling methods can serve this purpose, but they typically require vast amounts of fine-grained, high-quality data and suffer from constrained tool-calling trajectories. We propose VideoTIR, a novel framework that leverages Reinforcement Learning (RL) to encourage proper usage of a comprehensive multi-level toolkit for efficient long video understanding. VideoTIR explores both Zero-RL and SFT cold-starting to enable MLLMs to retrieve and focus on meaningful video segments/images/regions, improving both the accuracy and efficiency of long video understanding. To reduce redundant tool calling, we propose Toolkit Action Grouped Policy Optimization (TAGPO), which makes the calling process more efficient through stepwise reward assignment and the reuse of failed rollouts. Additionally, we develop a sandbox-based trajectory synthesis framework to generate high-quality trajectory data. Extensive experiments on three long-video QA benchmarks demonstrate the effectiveness and efficiency of our method.
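The credit-assignment idea behind TAGPO can be illustrated with a small sketch: group-relative advantages in the style of GRPO, combined with a per-step penalty on tool calls so that redundant invocations are discouraged. The exact reward shaping and the way failed rollouts are reused here are assumptions based on the abstract, not the paper's formulation:

```python
# Hedged sketch of TAGPO-style credit assignment (assumed form, not the
# paper's exact algorithm): group-normalized advantages with a stepwise
# penalty for each tool call.
import statistics

def tagpo_advantages(rollouts, tool_penalty: float = 0.1):
    """Each rollout is (final_reward, num_tool_calls).

    Failed rollouts (reward 0) are kept in the group rather than discarded,
    so they are reused as negative evidence when normalizing."""
    # Stepwise shaping: every tool call costs a small penalty.
    shaped = [r - tool_penalty * n for r, n in rollouts]
    mean = statistics.fmean(shaped)
    std = statistics.pstdev(shaped) or 1.0   # guard against zero variance
    return [(s - mean) / std for s in shaped]
```

With this shaping, two rollouts that reach the same answer are ranked by how few tool calls they needed, which is one way to push the policy toward fewer redundant invocations.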
Problem

Research questions and friction points this paper is trying to address.

long video understanding
hallucination
multimodal large language models
tool-integrated reasoning
visual-textual token imbalance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement Learning
Tool-Integrated Reasoning
Long Video Understanding
Policy Optimization
Trajectory Synthesis