VideoChat-R1.5: Visual Test-Time Scaling to Reinforce Multimodal Reasoning by Iterative Perception

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) lack dynamic, hierarchical visual attention mechanisms for video understanding. Method: This paper proposes the Visual Test-Time Scaling (VTTS) framework and its Iterative Perception (ITP) mechanism, which make perception compute scalable at inference time. Combining reinforcement learning with spatio-temporal supervision, the method progressively narrows focus onto high-confidence spatio-temporal regions, emulating human-like hierarchical attention. A new dataset, VTTS-80K, is constructed to support training and evaluation. Results: VideoChat-R1.5 achieves an average improvement of over 5% across more than 15 video understanding benchmarks, outperforming strong baselines such as Qwen2.5VL-3B and Qwen2.5VL-7B. It generalizes well across video dialogue and spatio-temporal reasoning tasks, moving multimodal reasoning from static perception toward dynamic, iterative understanding.

📝 Abstract
Inducing reasoning in multimodal large language models (MLLMs) is critical for achieving human-level perception and understanding. Existing methods mainly leverage LLM reasoning to analyze parsed visuals and are often limited by static perception stages. This paper introduces Visual Test-Time Scaling (VTTS), a novel approach to enhance MLLMs' reasoning via iterative perception during inference. VTTS mimics humans' hierarchical attention by progressively refining focus on high-confidence spatio-temporal regions, guided by updated textual predictions. Specifically, VTTS employs an Iterative Perception (ITP) mechanism, incorporating reinforcement learning with spatio-temporal supervision to optimize reasoning. To support this paradigm, we also present VTTS-80K, a dataset tailored for iterative perception. These designs allow an MLLM to enhance its performance by increasing its perceptual compute. Extensive experiments validate VTTS's effectiveness and generalization across diverse tasks and benchmarks. Our newly introduced VideoChat-R1.5 model achieves remarkable improvements, with an average increase of over 5% compared to robust baselines such as Qwen2.5VL-3B and -7B, across more than 15 benchmarks spanning video conversation, video reasoning, and spatio-temporal perception.
Problem

Research questions and friction points this paper is trying to address.

Enhancing multimodal reasoning through iterative visual perception refinement
Overcoming static perception limitations in multimodal language models
Improving spatio-temporal understanding via hierarchical attention mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Iterative perception mechanism refines visual focus dynamically
Reinforcement learning with spatio-temporal supervision optimizes reasoning
Visual test-time scaling enhances performance via increased perceptual compute
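The iterative-perception idea above — answer, check confidence, zoom into the proposed spatio-temporal region, and ask again with more perceptual compute — can be sketched as a simple inference loop. This is a minimal illustration under assumed interfaces: `itp_infer`, `crop`, and the model's output fields (`answer`, `confidence`, `time_span`, `bbox`) are hypothetical names, not the authors' actual API.

```python
# Hedged sketch of an iterative-perception (ITP) style loop. While the model
# is unconfident, we narrow focus to the spatio-temporal region it proposes
# and query again, so each extra iteration spends more perceptual compute.
# All names here are hypothetical illustrations, not the VideoChat-R1.5 API.

def crop(frame, bbox):
    """Crop one frame (a 2D list of pixels) to bbox = (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = bbox
    return [row[x0:x1] for row in frame[y0:y1]]

def itp_infer(frames, question, model, max_iters=3, conf_threshold=0.9):
    focus = frames
    answer, conf = None, 0.0
    for _ in range(max_iters):
        out = model(focus, question)           # one perception step
        answer, conf = out["answer"], out["confidence"]
        if conf >= conf_threshold:             # confident enough: stop early
            return answer, conf
        t0, t1 = out["time_span"]              # narrow to the proposed clip...
        focus = [crop(f, out["bbox"]) for f in focus[t0:t1]]  # ...and region
    return answer, conf

# Toy stand-in model: its confidence rises as the focus gets tighter.
def toy_model(focus, question):
    n_pixels = sum(len(row) for frame in focus for row in frame)
    return {
        "answer": "a red ball",
        "confidence": 1.0 / (1 + n_pixels / 16),
        "time_span": (0, max(1, len(focus) // 2)),
        "bbox": (0, 0, 2, 2),
    }

frames = [[[0] * 8 for _ in range(8)] for _ in range(4)]  # 4 dummy 8x8 frames
answer, conf = itp_infer(frames, "What is thrown?", toy_model)
```

The key design point this mirrors is that the stopping rule is confidence-driven, so the amount of perception compute scales with question difficulty rather than being fixed in advance.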
Ziang Yan
Zhejiang University, Shanghai AI Laboratory
Xinhao Li
Nanjing University
Video Understanding, Multimodal LLM, Vision-Language Learning
Yinan He
Shanghai AI Laboratory
Zhengrong Yue
Shanghai Jiao Tong University, PhD
Unified Multimodal Modeling, Video Understanding, Video Generation
Xiangyu Zeng
Shanghai AI Laboratory, Nanjing University
Yali Wang
Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; Shanghai AI Laboratory
Yu Qiao
Shanghai AI Laboratory
Limin Wang
Shanghai AI Laboratory, Nanjing University
Yi Wang
Shanghai AI Laboratory