🤖 AI Summary
Current large multimodal models (LMMs) suffer from limited visual tool spaces and task-specific workflows, which hinder fine-grained image interaction and long-horizon reasoning. To address this, we propose a vision-centric interactive reasoning paradigm, introducing a “data evolution flywheel” that automatically generates high-quality interactive reasoning data spanning multiple difficulty levels. We further design a visual progressive training curriculum that integrates point-level supervision with a two-stage reinforcement learning framework, enabling end-to-end optimization of perceptual alignment and interactive reasoning. Our method substantially improves both general and interactive reasoning capabilities, outperforming state-of-the-art LMMs on VTBench, our newly constructed, expert-verified benchmark for vision-centric interactive reasoning. This work establishes both a new benchmark and a scalable technical pathway for image-interactive cognitive modeling.
📝 Abstract
Empowering Large Multimodal Models (LMMs) to deeply integrate image interaction with long-horizon reasoning remains a long-standing challenge in the field. Recent advances in vision-centric reasoning explore a promising "Thinking with Images" paradigm for LMMs, marking a shift from image-assisted reasoning to image-interactive thinking. While this milestone enables models to focus on fine-grained image regions, progress remains constrained by limited visual tool spaces and task-specific workflow designs. To bridge this gap, we present V-Thinker, a general-purpose multimodal reasoning assistant that enables interactive, vision-centric thinking through end-to-end reinforcement learning. V-Thinker comprises two key components: (1) a Data Evolution Flywheel that automatically synthesizes, evolves, and verifies interactive reasoning datasets along three dimensions: diversity, quality, and difficulty; and (2) a Visual Progressive Training Curriculum that first aligns perception via point-level supervision, then integrates interactive reasoning through a two-stage reinforcement learning framework. Furthermore, we introduce VTBench, an expert-verified benchmark targeting vision-centric interactive reasoning tasks. Extensive experiments demonstrate that V-Thinker consistently outperforms strong LMM-based baselines in both general and interactive reasoning scenarios, providing valuable insights for advancing image-interactive reasoning applications.
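To make the pipeline structure concrete, below is a minimal sketch of how the two components described in the abstract could fit together. Every name here (`Sample`, `data_evolution_flywheel`, `point_level_supervision`, etc.) is a hypothetical illustration, not the paper's actual API; the sketch assumes only what the abstract states: a synthesize → evolve → verify data loop, followed by point-level supervised alignment and a two-stage RL phase.

```python
"""Minimal sketch of the V-Thinker pipeline as described in the abstract.
All function and class names are hypothetical; the paper's real interfaces
and training internals (losses, rewards) are not specified here."""

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Sample:
    """One interactive reasoning example: an image, a question, and an
    interaction trace (hypothetically, points clicked on the image)."""
    image_id: str
    question: str
    interaction_trace: List[Tuple[int, int]]
    difficulty: int = 1


def data_evolution_flywheel(
    seed: List[Sample],
    synthesize: Callable[[List[Sample]], List[Sample]],
    evolve: Callable[[Sample], Sample],
    verify: Callable[[Sample], bool],
    rounds: int = 3,
) -> List[Sample]:
    """Synthesize -> evolve -> verify loop, targeting the abstract's three
    dimensions: diversity (new tasks), difficulty (harder variants), and
    quality (keep only verified samples)."""
    pool = list(seed)
    for _ in range(rounds):
        candidates = synthesize(pool)                  # diversity
        evolved = [evolve(s) for s in candidates]      # difficulty
        pool.extend(s for s in evolved if verify(s))   # quality
    return pool


def point_level_supervision(model, data: List[Sample]):
    """Placeholder for the perception-alignment stage (point-level SFT)."""
    return model


def reinforcement_learning(model, data: List[Sample], stage: int):
    """Placeholder for one RL stage of the two-stage framework."""
    return model


def visual_progressive_curriculum(model, data: List[Sample]):
    """Stage ordering only: align perception first, then run two RL stages
    to integrate interactive reasoning, per the abstract."""
    model = point_level_supervision(model, data)
    for stage in (1, 2):
        model = reinforcement_learning(model, data, stage=stage)
    return model


if __name__ == "__main__":
    seed = [Sample("img_0", "What object is circled?", [(10, 20)])]
    data = data_evolution_flywheel(
        seed,
        synthesize=lambda pool: list(pool),
        evolve=lambda s: Sample(
            s.image_id, s.question, s.interaction_trace, s.difficulty + 1
        ),
        verify=lambda s: True,
        rounds=2,
    )
    print(f"{len(data)} samples after evolution")  # 4 with this toy setup
```

The key design point the sketch captures is the ordering: data evolution is decoupled from training, and perception alignment precedes reinforcement learning, so the RL stages operate on a model that can already ground point-level references in the image.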