🤖 AI Summary
Multimodal large language models (MLLMs) lack dynamic, hierarchical visual attention mechanisms for video understanding. Method: This paper proposes the Visual Test-Time Scaling (VTTS) framework and its Iterative Perception (ITP) mechanism, enabling scalable perception computation during inference for the first time. Integrating reinforcement learning with spatio-temporal supervision, the method progressively focuses on high-confidence spatio-temporal regions, emulating human-like hierarchical attention. A new VTTS-80K dataset is constructed to support training and evaluation. Results: VideoChat-R1.5 achieves a >5% average improvement across 15+ video understanding benchmarks, significantly outperforming strong baselines such as Qwen2.5VL-3B and Qwen2.5VL-7B. It demonstrates superior generalization on video dialogue and spatio-temporal reasoning tasks, advancing multimodal reasoning from static perception toward dynamic, iterative understanding.
📝 Abstract
Inducing reasoning in multimodal large language models (MLLMs) is critical for achieving human-level perception and understanding. Existing methods mainly leverage LLM reasoning to analyze parsed visuals and are often limited by static perception stages. This paper introduces Visual Test-Time Scaling (VTTS), a novel approach to enhance MLLMs' reasoning via iterative perception during inference. VTTS mimics humans' hierarchical attention by progressively refining focus on high-confidence spatio-temporal regions, guided by updated textual predictions. Specifically, VTTS employs an Iterative Perception (ITP) mechanism, incorporating reinforcement learning with spatio-temporal supervision to optimize reasoning. To support this paradigm, we also present VTTS-80K, a dataset tailored for iterative perception. These designs allow an MLLM to enhance its performance by increasing its perceptual compute. Extensive experiments validate VTTS's effectiveness and generalization across diverse tasks and benchmarks. Our newly introduced VideoChat-R1.5 model achieves remarkable improvements, with an average gain of over 5% compared to robust baselines such as Qwen2.5VL-3B and -7B, across more than 15 benchmarks encompassing video conversation, video reasoning, and spatio-temporal perception.
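To make the iterative-perception loop concrete, here is a minimal sketch of the control flow the abstract describes: at each step the model emits a textual prediction, a confidence score, and an attended spatio-temporal region; the next step re-perceives only that region, conditioned on the accumulated predictions. The `mock_mllm` function, the `Region` type, and the stopping rule are illustrative assumptions, not the actual VideoChat-R1.5 interface.

```python
from dataclasses import dataclass


@dataclass
class Region:
    """A spatio-temporal crop: a time span plus a normalized bounding box."""
    t_start: float
    t_end: float
    box: tuple  # (x1, y1, x2, y2) in [0, 1]


def mock_mllm(clip, question, context):
    """Hypothetical stand-in for one MLLM perception call (NOT the real
    VideoChat-R1.5 API). Returns a draft answer, a confidence score, and
    the region the model attends to. Here we simply halve the time span
    around its midpoint to emulate progressive focusing, and grow the
    confidence with each round of accumulated textual context."""
    mid = (clip.t_start + clip.t_end) / 2
    span = (clip.t_end - clip.t_start) / 2
    region = Region(mid - span / 2, mid + span / 2, clip.box)
    answer = f"draft answer after seeing [{region.t_start:.1f}, {region.t_end:.1f}]"
    confidence = min(1.0, 0.5 + 0.2 * len(context))
    return answer, confidence, region


def iterative_perception(video, question, max_steps=3, stop_conf=0.9):
    """ITP-style loop: zoom into the high-confidence region each step,
    guided by the updated textual predictions, until confident enough."""
    clip, context = video, []
    answer, conf = None, 0.0
    for _ in range(max_steps):
        answer, conf, region = mock_mllm(clip, question, context)
        context.append(answer)  # updated prediction guides the next pass
        clip = region           # re-perceive only the attended region
        if conf >= stop_conf:
            break
    return answer, conf


video = Region(0.0, 60.0, (0.0, 0.0, 1.0, 1.0))
final_answer, final_conf = iterative_perception(video, "What happens at the end?")
```

The key point the sketch captures is that perceptual compute scales with the number of loop iterations (`max_steps`), which is exactly the knob that test-time scaling turns.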