🤖 AI Summary
This work addresses the poorly understood mechanisms of vision–language integration in current multimodal large language models (MLLMs). Through layer-wise masking analysis and attention evolution tracking, the study reveals that cross-modal fusion predominantly occurs in specific layers and identifies a late-stage "retrospective" reactivation of visual signals. Building on these insights, the authors propose a training-free contrastive attention framework that guides the model to amplify meaningful cross-modal attention shifts. Extensive experiments across diverse mainstream MLLM architectures and multimodal benchmarks demonstrate the effectiveness of the proposed mechanism, yielding significant improvements in multimodal reasoning performance.
📝 Abstract
Multimodal Large Language Models (MLLMs) have achieved remarkable progress in vision-language understanding, yet how they internally integrate visual and textual information remains poorly understood. To bridge this gap, we perform a systematic layer-wise masking analysis across multiple architectures, revealing how visual-text fusion evolves within MLLMs. The results show that fusion emerges at several specific layers rather than being uniformly distributed across the network, and certain models exhibit a late-stage "review" phenomenon in which visual signals are reactivated before output generation. We further analyze layer-wise attention evolution and observe persistent high-attention noise on irrelevant regions, along with gradually increasing attention on text-aligned areas. Guided by these insights, we introduce a training-free contrastive attention framework that models the transformation between early fusion layers and final layers to highlight meaningful attention shifts. Extensive experiments across various MLLMs and benchmarks validate our analysis and demonstrate that the proposed approach improves multimodal reasoning performance. Code will be released.
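The abstract does not spell out the contrastive formulation, but the stated idea (contrasting final-layer attention against an early fusion layer so that attention that *shifts toward* text-aligned regions is amplified while persistent noise shared by both layers is suppressed) can be sketched roughly as follows. All names, the clipping rule, and the `alpha` scaling are illustrative assumptions, not the paper's actual method:

```python
import numpy as np

def contrastive_attention(attn_early, attn_final, alpha=1.0, eps=1e-8):
    """Hypothetical sketch of a training-free contrastive attention step.

    attn_early, attn_final: attention distributions over image regions
    taken from an early fusion layer and a final layer, respectively.
    Regions whose attention grows from early to final layers are
    amplified; persistent mass present in both layers (treated here as
    noise) contributes no positive shift.
    """
    # Keep only the attention gained between the two layers; decayed
    # regions are clipped to zero rather than subtracted further.
    shift = np.clip(attn_final - attn_early, 0.0, None)
    # Amplify the final-layer attention along the gained directions.
    contrasted = attn_final + alpha * shift
    # Renormalize so the result is again a distribution.
    return contrasted / (contrasted.sum() + eps)

# Toy example: region 2 gains attention across layers, region 0 is
# high-attention noise that persists, so region 2 ends up dominant.
early = np.array([0.5, 0.3, 0.2])
final = np.array([0.4, 0.2, 0.4])
print(contrastive_attention(early, final))
```

In this toy run the gained mass on the last region pushes it above the noisy first region after renormalization, which is the qualitative behavior the abstract describes; the real framework presumably operates on per-head attention maps inside the model rather than on standalone vectors.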