Don't Look Only Once: Towards Multimodal Interactive Reasoning with Selective Visual Revisitation

📅 2025-05-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal large language models (MLLMs) process visual input only once during inference and rely solely on internal memory for all subsequent reasoning, which limits fine-grained visual referencing and multi-step reasoning. To address this, the authors propose v1, a framework that enables selective, dynamic re-attention to image regions during inference: (i) a point-and-copy mechanism supports hypothesis-driven, real-time retrieval of visual tokens; (ii) v1g, a 300K-sample dataset of multimodal reasoning trajectories with interleaved visual localization annotations, is introduced for training; and (iii) the capability is added to existing architectures through lightweight modifications. Evaluated on MathVista, MathVision, and MathVerse, v1 consistently outperforms comparable baselines, with the largest gains on tasks requiring fine-grained visual reference and multi-step reasoning.

📝 Abstract
We present v1, a lightweight extension to Multimodal Large Language Models (MLLMs) that enables selective visual revisitation during inference. While current MLLMs typically consume visual input only once and reason purely over internal memory, v1 introduces a simple point-and-copy mechanism that allows the model to dynamically retrieve relevant image regions throughout the reasoning process. This mechanism augments existing architectures with minimal modifications, enabling contextual access to visual tokens based on the model's evolving hypotheses. To train this capability, we construct v1g, a dataset of 300K multimodal reasoning traces with interleaved visual grounding annotations. Experiments on three multimodal mathematical reasoning benchmarks -- MathVista, MathVision, and MathVerse -- demonstrate that v1 consistently improves performance over comparable baselines, particularly on tasks requiring fine-grained visual reference and multi-step reasoning. Our results suggest that dynamic visual access is a promising direction for enhancing grounded multimodal reasoning. Code, models, and data will be released to support future research.
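The abstract describes point-and-copy only at a high level, so the following is a minimal sketch of the general idea, not the paper's implementation: cached visual token embeddings are scored against the model's current hidden state, and the best-matching tokens are copied back into the reasoning context. All names, shapes, and the cosine-similarity scoring rule are assumptions for illustration.

```python
# Hypothetical sketch of a "point-and-copy" retrieval step. The real v1
# mechanism is not specified in this summary; this only illustrates the
# shape of the idea: point at relevant visual tokens, then copy them back.
import numpy as np

def point_and_copy(hidden, visual_tokens, k=4):
    """Select the k cached visual tokens most similar to the current
    hidden state, so they can be re-appended to the reasoning context."""
    # Cosine similarity between the pointer query and each cached token.
    h = hidden / np.linalg.norm(hidden)
    v = visual_tokens / np.linalg.norm(visual_tokens, axis=1, keepdims=True)
    scores = v @ h
    top = np.argsort(scores)[::-1][:k]  # indices of best-matching regions
    return visual_tokens[top], top

# Toy usage: 10 cached visual tokens of dimension 8, one pointer query
# constructed to lie close to token 3.
rng = np.random.default_rng(0)
cache = rng.normal(size=(10, 8))
query = cache[3] + 0.01 * rng.normal(size=8)
copied, idx = point_and_copy(query, cache, k=2)
print(idx[0])  # token 3 should rank first
```

In a real MLLM the selected tokens would be spliced into the decoder's input sequence at the current step; here the function simply returns them, which is enough to show the retrieve-then-copy pattern.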
Problem

Research questions and friction points this paper is trying to address.

Current MLLMs consume visual input only once and reason purely over internal memory
Fine-grained visual referencing is limited during extended reasoning
Multi-step multimodal mathematical reasoning remains weak
Innovation

Methods, ideas, or system contributions that make the work stand out.

Selective visual revisitation for MLLMs
Point-and-copy mechanism for dynamic retrieval
Minimal architectural changes for contextual access