Look-Back: Implicit Visual Re-focusing in MLLM Reasoning

📅 2025-07-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multimodal large language models (MLLMs) commonly suffer from insufficient visual information utilization and excessive reliance on textual cues during late-stage reasoning. To address this, we propose an **implicit visual re-attention mechanism** that requires no architectural modification or additional inputs. Instead, it analyzes the model’s attention patterns to generate lightweight, interpretable guidance signals, enabling the model to autonomously and dynamically refocus on salient visual regions. This mechanism endows MLLMs with adaptive decision-making capabilities—determining *when*, *where*, and *how* to revisit visual content—thereby facilitating more robust multimodal fusion and reasoning. Extensive experiments on major benchmarks—including MMBench, OCRBench, and TextVQA—demonstrate consistent improvements in both general reasoning and fine-grained visual perception tasks. The method exhibits strong effectiveness, cross-dataset generalizability, and deployment efficiency, requiring only inference-time attention analysis without retraining or parameter updates.
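The summary above describes an inference-time mechanism: analyze the model's attention patterns and emit a lightweight signal when the model has drifted away from the visual input. The paper does not specify its exact procedure here, but the core idea can be sketched as follows. This is a hypothetical minimal illustration, not the authors' implementation: `visual_attention_ratio`, `should_look_back`, and the 0.15 threshold are all assumptions for the sake of the example.

```python
import numpy as np

def visual_attention_ratio(attn_weights, visual_token_mask):
    """Fraction of attention mass on visual tokens at one decoding step.

    attn_weights: (num_heads, seq_len) attention distribution from the
        token currently being generated (each row sums to 1).
    visual_token_mask: (seq_len,) boolean, True where the input token
        came from the image encoder.
    """
    per_head = attn_weights[:, visual_token_mask].sum(axis=1)  # (num_heads,)
    return float(per_head.mean())

def should_look_back(attn_weights, visual_token_mask, threshold=0.15):
    """Flag a step where the model should re-focus on the image.

    The 0.15 threshold is an arbitrary illustration, not a value
    reported in the paper.
    """
    return visual_attention_ratio(attn_weights, visual_token_mask) < threshold

# Toy example: 4 heads over 10 input tokens, the first 6 of which are visual.
rng = np.random.default_rng(0)
attn = rng.random((4, 10))
attn /= attn.sum(axis=1, keepdims=True)  # normalize rows into distributions
mask = np.array([True] * 6 + [False] * 4)

print(f"visual attention ratio: {visual_attention_ratio(attn, mask):.2f}")
print("look back:", should_look_back(attn, mask))
```

In a real MLLM, `attn_weights` would come from the decoder's attention tensors at generation time (e.g. the per-layer attention outputs most transformer implementations can expose), which matches the summary's claim that the method needs only inference-time attention analysis, with no retraining or parameter updates.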

📝 Abstract
Multimodal Large Language Models (MLLMs) have achieved remarkable progress in multimodal reasoning. However, they often excessively rely on textual information during the later stages of inference, neglecting the crucial integration of visual input. Current methods typically address this by explicitly injecting visual information to guide the reasoning process. In this work, through an analysis of MLLM attention patterns, we made an intriguing observation: with appropriate guidance, MLLMs can spontaneously re-focus their attention on visual inputs during the later stages of reasoning, even without explicit visual information injection. This spontaneous shift in focus suggests that MLLMs are intrinsically capable of performing visual fusion reasoning. Building on this insight, we introduce Look-Back, an implicit approach designed to guide MLLMs to "look back" at visual information in a self-directed manner during reasoning. Look-Back empowers the model to autonomously determine when, where, and how to re-focus on visual inputs, eliminating the need for explicit model-structure constraints or additional input. We demonstrate that Look-Back significantly enhances the model's reasoning and perception capabilities, as evidenced by extensive empirical evaluations on multiple multimodal benchmarks.
Problem

Research questions and friction points this paper is trying to address.

MLLMs over-rely on textual cues in late-stage reasoning, neglecting visual integration
Existing methods explicitly inject visual information, imposing structural constraints or extra inputs
Look-Back is proposed as an implicit, self-directed visual re-focusing alternative
Innovation

Methods, ideas, or system contributions that make the work stand out.

Implicit visual re-focusing in MLLMs
Self-directed attention on visual inputs
No explicit model-structure constraints needed
Shuo Yang
Peking University, Shenzhen Graduate School
Yuwei Niu
Chongqing University
Visual Representations, Language Priors
Yuyang Liu
Peking University, Shenzhen Graduate School
Yang Ye
Peking University, Shenzhen Graduate School
Bin Lin
Peking University, Shenzhen Graduate School
Li Yuan
Research Associate, University of Science & Technology of China (USTC)
Antibiotic resistance, Wastewater treatment, Environmental bioremediation, Anaerobic digestion, Fate of organic pollutants