Tool-Augmented Spatiotemporal Reasoning for Streamlining Video Question Answering Task

📅 2025-12-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current multimodal large language models (MLLMs) struggle to jointly model intra-frame spatial relationships and inter-frame causal dynamics in complex video question answering. To address this, we propose STAR, a spatiotemporal reasoning framework based on tool-augmented reasoning: it introduces a strategic tool scheduling mechanism and progressive key region localization; constructs an extensible, shortcut-free video toolset; and integrates spatiotemporal disentangled modeling, multi-granularity feature extraction, and sequential tool invocation control. Evaluated on VideoMME and LongVideoBench, STAR achieves improvements of +8.2% and +4.6%, respectively, demonstrating significantly enhanced fine-grained spatiotemporal understanding. The codebase and tool library are publicly released, establishing a novel paradigm for autonomous video-analytic agents.

📝 Abstract
The Video Question Answering (VideoQA) task serves as a critical playground for evaluating whether foundation models can effectively perceive, understand, and reason about dynamic real-world scenarios. However, existing Multimodal Large Language Models (MLLMs) struggle to simultaneously model spatial relationships within video frames and understand the causal dynamics of temporal evolution on complex, reasoning-intensive VideoQA tasks. In this work, we equip the MLLM with a comprehensive and extensible Video Toolkit to enhance its spatiotemporal reasoning capabilities while balancing the quantity and diversity of tools. To better control the tool invocation sequence and avoid toolchain shortcut issues, we propose a Spatiotemporal Reasoning Framework (STAR) that strategically schedules temporal and spatial tools, thereby progressively localizing the key area in the video. Our STAR framework enhances GPT-4o using lightweight tools, achieving an 8.2% gain on VideoMME and 4.6% on LongVideoBench. We believe that our proposed Video Toolkit and STAR framework mark an important step towards building autonomous and intelligent video analysis assistants. The code is publicly available at https://github.com/fansunqi/VideoTool.
Problem

Research questions and friction points this paper is trying to address.

Enhances spatiotemporal reasoning in VideoQA tasks
Addresses tool invocation and shortcut issues in MLLMs
Improves video analysis with a toolkit and scheduling framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

Equipping MLLM with a comprehensive Video Toolkit
Proposing a Spatiotemporal Reasoning Framework (STAR)
Scheduling temporal and spatial tools progressively
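The scheduling idea above — invoke temporal tools first to narrow down the relevant time span, then spatial tools to zoom into key regions, with a fixed ordering that blocks toolchain shortcuts — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual implementation; all tool names (`sample_frames`, `localize_segment`, `detect_objects`, `crop_region`) and the `STARScheduler` class are assumptions for illustration.

```python
# Hypothetical sketch of STAR-style sequential tool scheduling.
# All tool names and classes here are illustrative assumptions,
# not the released VideoTool API.

from dataclasses import dataclass, field


@dataclass
class ToolResult:
    tool: str
    evidence: str


@dataclass
class STARScheduler:
    """Schedules temporal tools before spatial tools so reasoning
    narrows progressively: whole video -> key frames -> key regions."""

    temporal_tools: list = field(
        default_factory=lambda: ["sample_frames", "localize_segment"]
    )
    spatial_tools: list = field(
        default_factory=lambda: ["detect_objects", "crop_region"]
    )

    def run(self, question: str) -> list[ToolResult]:
        trace = []
        # Stage 1: temporal tools select the relevant time span.
        for tool in self.temporal_tools:
            trace.append(ToolResult(tool, f"{tool} output for: {question}"))
        # Stage 2: spatial tools localize regions within those frames.
        # Enforcing this stage order prevents the model from jumping
        # straight to an answer (the "toolchain shortcut" issue).
        for tool in self.spatial_tools:
            trace.append(ToolResult(tool, f"{tool} output for: {question}"))
        return trace


trace = STARScheduler().run("What does the chef add after stirring?")
print([r.tool for r in trace])
```

The key design choice mirrored here is that the invocation order is imposed by the scheduler rather than left to the model, so every spatial lookup is grounded in a previously localized temporal segment.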
Sunqi Fan
Tsinghua University
Computer Vision, Machine Learning
Jiashuo Cui
BNRist, Department of Computer Science and Technology, Tsinghua University
Meng-Hao Guo
Postdoc, Tsinghua University
Foundation Models, Reasoning, Agent, Computer Vision, Computer Graphics
Shuojin Yang
BNRist, Department of Computer Science and Technology, Tsinghua University