🤖 AI Summary
Multimodal large language models (MLLMs) often underuse visual information and rely excessively on textual cues during late-stage reasoning. To address this, we propose **Look-Back**, an implicit visual re-attention mechanism that requires no architectural modification or additional inputs. Instead, it analyzes the model's attention patterns to generate lightweight, interpretable guidance signals, enabling the model to autonomously and dynamically refocus on salient visual regions. This mechanism endows MLLMs with adaptive decision-making capabilities, determining *when*, *where*, and *how* to revisit visual content, thereby facilitating more robust multimodal fusion and reasoning. Extensive experiments on major benchmarks, including MMBench, OCRBench, and TextVQA, demonstrate consistent improvements in both general reasoning and fine-grained visual perception tasks. The method exhibits strong effectiveness, cross-dataset generalizability, and deployment efficiency, requiring only inference-time attention analysis without retraining or parameter updates.
📝 Abstract
Multimodal Large Language Models (MLLMs) have achieved remarkable progress in multimodal reasoning. However, they often excessively rely on textual information during the later stages of inference, neglecting the crucial integration of visual input. Current methods typically address this by explicitly injecting visual information to guide the reasoning process. In this work, through an analysis of MLLM attention patterns, we made an intriguing observation: with appropriate guidance, MLLMs can spontaneously re-focus their attention on visual inputs during the later stages of reasoning, even without explicit visual information injection. This spontaneous shift in focus suggests that MLLMs are intrinsically capable of performing visual fusion reasoning. Building on this insight, we introduce Look-Back, an implicit approach designed to guide MLLMs to "look back" at visual information in a self-directed manner during reasoning. Look-Back empowers the model to autonomously determine when, where, and how to re-focus on visual inputs, eliminating the need for explicit model-structure constraints or additional input. We demonstrate that Look-Back significantly enhances the model's reasoning and perception capabilities, as evidenced by extensive empirical evaluations on multiple multimodal benchmarks.
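To make the core idea concrete, here is a minimal sketch (not the paper's actual implementation) of inference-time attention monitoring. It assumes the decoder exposes per-step attention weights over the input sequence and that visual tokens occupy known positions; the threshold value and both helper functions are illustrative assumptions.

```python
def visual_attention_ratio(attn_weights, visual_idx):
    """Fraction of this decoding step's attention mass on visual tokens.

    attn_weights: per-token attention weights for one decoding step.
    visual_idx: positions in the input sequence that hold visual tokens.
    """
    total = sum(attn_weights)
    if total == 0:
        return 0.0
    return sum(attn_weights[i] for i in visual_idx) / total


def should_look_back(attn_weights, visual_idx, threshold=0.2):
    """Signal a re-attention ("look back") step when the share of
    attention on visual tokens drops below a threshold (hypothetical
    trigger; the actual guidance signal in the paper may differ)."""
    return visual_attention_ratio(attn_weights, visual_idx) < threshold


# Positions 0-1 are visual tokens; this step attends mostly to text,
# so the monitor would prompt the model to re-focus on the image.
step_weights = [0.05, 0.05, 0.40, 0.50]
print(should_look_back(step_weights, visual_idx=[0, 1]))  # True
```

Such a check runs purely at inference time on attention weights the model already produces, which matches the summary's claim of requiring no retraining or parameter updates.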