🤖 AI Summary
This work addresses the limitations of current large vision-language models (LVLMs), which rely on single-pass visual encoding and struggle to preserve fine-grained details or achieve effective cross-modal alignment in multi-region or multi-image scenarios. To overcome these challenges, the authors propose "chatting with images", a framework that reformulates visual manipulation as language-guided feature modulation. Guided by expressive language prompts, the model dynamically re-encodes multiple image regions jointly, enabling tight coupling between linguistic reasoning and visual state updates. The framework is instantiated in ViLaVT, a novel LVLM equipped with a dynamic vision encoder, and trained via a two-stage curriculum combining supervised fine-tuning and reinforcement learning. Extensive experiments demonstrate consistent and significant gains across eight benchmarks, with particularly strong results on complex multi-image and video-based spatial reasoning tasks.
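To make the core mechanism concrete, below is a minimal PyTorch sketch of one plausible form of language-guided feature modulation: FiLM-style conditioning on a prompt embedding, followed by joint attention over region tokens drawn from multiple regions or images. Every module, name, and shape here is an illustrative assumption; the paper's actual ViLaVT encoder may differ substantially.

```python
import torch
import torch.nn as nn

class LanguageGuidedReEncoder(nn.Module):
    """Illustrative sketch (not the authors' ViLaVT implementation):
    jointly re-encode several image-region features under the control of a
    language prompt, via FiLM-style modulation plus joint self-attention."""

    def __init__(self, dim: int = 768, num_heads: int = 8):
        super().__init__()
        # Map the prompt embedding to per-channel scale/shift (FiLM).
        self.film = nn.Linear(dim, 2 * dim)
        # Joint re-encoding over all selected regions in a single pass,
        # so distant regions or multiple images can exchange information.
        self.joint_attn = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True
        )

    def forward(self, region_feats: torch.Tensor, prompt_emb: torch.Tensor):
        # region_feats: (batch, num_region_tokens, dim), tokens pooled from
        #   every region named by the language step, possibly across images.
        # prompt_emb:   (batch, dim), embedding of the guiding language prompt.
        scale, shift = self.film(prompt_emb).chunk(2, dim=-1)
        modulated = region_feats * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        return self.joint_attn(modulated)  # updated visual state

# Toy usage: 2 regions x 16 tokens each, jointly re-encoded under one prompt.
feats = torch.randn(1, 32, 768)
prompt = torch.randn(1, 768)
updated = LanguageGuidedReEncoder()(feats, prompt)
print(updated.shape)  # torch.Size([1, 32, 768])
```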
📝 Abstract
Current large vision-language models (LVLMs) typically rely on text-only reasoning over a single-pass visual encoding, which often loses fine-grained visual information. The recently proposed "thinking with images" paradigm attempts to alleviate this limitation by manipulating images via external tools or code; however, the resulting visual states are often insufficiently grounded in linguistic semantics, impairing effective cross-modal alignment, particularly when visual semantics or geometric relationships must be reasoned over across distant regions or multiple images. To address these challenges, we propose "chatting with images", a new framework that reframes visual manipulation as language-guided feature modulation. Under the guidance of expressive language prompts, the model dynamically performs joint re-encoding over multiple image regions, enabling tighter coupling between linguistic reasoning and visual state updates. We instantiate this paradigm in ViLaVT, a novel LVLM equipped with a dynamic vision encoder explicitly designed for such interactive visual reasoning, and train it with a two-stage curriculum combining supervised fine-tuning and reinforcement learning to promote effective reasoning behaviors. Extensive experiments across eight benchmarks demonstrate that ViLaVT achieves strong and consistent improvements, with particularly pronounced gains on complex multi-image and video-based spatial reasoning tasks.
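For readers curious how a two-stage curriculum of this kind is typically organized, here is a heavily simplified sketch: a supervised fine-tuning pass over annotated reasoning traces, followed by a REINFORCE-style reward-weighted update. The model interface, the reward function, and the `generate_with_logprobs` method are all hypothetical stand-ins; the abstract does not specify the paper's exact training recipe or RL algorithm.

```python
from torch.optim import AdamW

def train_two_stage(model, sft_loader, rl_loader, reward_fn, device="cuda"):
    """Sketch of an SFT -> RL curriculum. `model`, `reward_fn`, and
    `generate_with_logprobs` are hypothetical, not the paper's API."""
    opt = AdamW(model.parameters(), lr=1e-5)
    model.train()

    # Stage 1: supervised fine-tuning on annotated reasoning traces
    # (assumes an HF-style forward that returns an object with a .loss field).
    for batch in sft_loader:
        loss = model(**{k: v.to(device) for k, v in batch.items()}).loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Stage 2: reinforcement learning to reward effective reasoning behavior,
    # shown here as a plain REINFORCE surrogate on sampled generations.
    for batch in rl_loader:
        out = model.generate_with_logprobs(batch["prompt"])  # hypothetical API
        reward = reward_fn(out.text, batch["answer"])        # scalar reward
        loss = -reward * out.logprobs.sum()                  # maximize reward
        opt.zero_grad()
        loss.backward()
        opt.step()
```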