Interpreting Attention Heads for Image-to-Text Information Flow in Large Vision-Language Models

📅 2025-09-22
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In large vision-language models (LVLMs), the image-to-text information transfer mechanism remains poorly interpretable due to parallel multi-head attention. To address this, we propose head attribution—a unified framework integrating component attribution and token-level information flow tracing—to systematically dissect how attention heads collaborate in visual question answering. Our analysis reveals that semantic content governs the selection of critical heads; information flow exhibits a hierarchical structure wherein role-relevant tokens receive stronger visual signals than background tokens. Moreover, we identify a small subset of high-contribution attention heads that play a decisive role in cross-modal information transfer. This work is the first to uncover the ordered, semantics-driven, and hierarchically interpretable nature of image-to-text information flow in LVLMs, establishing a new paradigm for model diagnosis and controllable editing.

Technology Category

Application Category

📝 Abstract
Large Vision-Language Models (LVLMs) answer visual questions by transferring information from images to text through a series of attention heads. While this image-to-text information flow is central to visual question answering, its underlying mechanism remains difficult to interpret due to the simultaneous operation of numerous attention heads. To address this challenge, we propose head attribution, a technique inspired by component attribution methods, to identify consistent patterns among attention heads that play a key role in information transfer. Using head attribution, we investigate how LVLMs rely on specific attention heads to identify and answer questions about the main object in an image. Our analysis reveals that a distinct subset of attention heads facilitates the image-to-text information flow. Remarkably, we find that the selection of these heads is governed by the semantic content of the input image rather than its visual appearance. We further examine the flow of information at the token level and discover that (1) text information first propagates to role-related tokens and the final token before receiving image information, and (2) image information is embedded in both object-related and background tokens. Our work provides evidence that image-to-text information flow follows a structured process, and that analysis at the attention-head level offers a promising direction toward understanding the mechanisms of LVLMs.
Problem

Research questions and friction points this paper is trying to address.

Interpreting how attention heads transfer image information to text in LVLMs
Identifying consistent patterns among attention heads for visual question answering
Understanding the structured process of image-to-text information flow mechanisms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proposes head attribution technique for attention analysis
Identifies semantic-driven attention heads for information flow
Reveals structured token-level information propagation patterns
🔎 Similar Papers
No similar papers found.