DEX-AR: A Dynamic Explainability Method for Autoregressive Vision-Language Models

📅 2026-03-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing interpretability methods struggle to capture the complex multimodal interactions in autoregressive vision-language models during token-by-token generation. To address this, the paper proposes a dynamic interpretability framework with two components: a dynamic attention-head filtering mechanism that identifies vision-sensitive attention heads, and a sequence-level strategy that distinguishes visually grounded tokens from purely linguistic ones. Building on the token-wise generation process, the method computes hierarchical gradients to produce both token-level and sequence-level 2D saliency maps that localize critical image regions. Experiments on ImageNet, VQAv2, and Pascal VOC show that the approach significantly outperforms existing techniques on perturbation tests and segmentation-based evaluation metrics.
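The head-filtering idea from the summary can be illustrated with a minimal numpy sketch. It assumes the scoring criterion is simply each head's attention mass on image-token positions; the paper's actual criterion (and the function name `filter_vision_heads`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def filter_vision_heads(attn, image_token_mask, top_k=4):
    """Score each head by its attention mass on image tokens; keep the top-k.

    attn: (num_heads, seq_len) attention weights from the current query token.
    image_token_mask: (seq_len,) boolean mask marking image-token positions.
    Returns indices of the retained (vision-sensitive) heads.
    """
    vision_mass = attn[:, image_token_mask].sum(axis=1)  # per-head mass on image tokens
    return np.argsort(vision_mass)[::-1][:top_k]         # heads, most vision-sensitive first

# Toy example: 6 heads over a 10-token context whose first 4 tokens are image tokens.
rng = np.random.default_rng(0)
attn = rng.random((6, 10))
attn /= attn.sum(axis=1, keepdims=True)                  # normalize rows to attention distributions
mask = np.zeros(10, dtype=bool)
mask[:4] = True
heads = filter_vision_heads(attn, mask, top_k=3)
print(heads)
```

In a real VLM the mask would mark the projected image-patch embeddings in the input sequence, and filtering would be applied per layer during generation.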

📝 Abstract
As Vision-Language Models (VLMs) become increasingly sophisticated and widely used, understanding their decision-making process becomes ever more crucial. Traditional explainability methods, designed for classification tasks, struggle with modern autoregressive VLMs due to their complex token-by-token generation process and the intricate interactions between visual and textual modalities. We present DEX-AR (Dynamic Explainability for AutoRegressive models), a novel explainability method designed to address these challenges by generating both per-token and sequence-level 2D heatmaps that highlight the image regions crucial for the model's textual responses. The proposed method interprets autoregressive VLMs, accounting for the varying importance of layers and generated tokens, by computing layer-wise gradients with respect to attention maps during the token-by-token generation process. DEX-AR introduces two key innovations: a dynamic head filtering mechanism that identifies attention heads focused on visual information, and a sequence-level filtering approach that aggregates per-token explanations while distinguishing between visually grounded and purely linguistic tokens. Our evaluation on ImageNet, VQAv2, and Pascal VOC shows consistent improvements on both perturbation-based metrics, using a novel normalized perplexity measure, and segmentation-based metrics.
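The gradient-weighted aggregation the abstract describes can be sketched in a few lines of numpy. This is a hedged simplification: it takes precomputed attention maps and their gradients as inputs (rather than running autograd through a VLM), uses ReLU(gradient × attention) as the per-token relevance, and stands in for the sequence-level filtering with generic per-token weights. The function names and the exact aggregation rule are assumptions for illustration.

```python
import numpy as np

def token_saliency(attn, grad):
    """Per-token 2D saliency: ReLU(gradient x attention), averaged over heads.

    attn, grad: (num_heads, H, W) attention over image patches and its gradient
    w.r.t. the generated token's score. Returns an (H, W) non-negative map.
    """
    return np.maximum(grad * attn, 0.0).mean(axis=0)

def sequence_saliency(attn_seq, grad_seq, token_weights):
    """Aggregate per-token maps into one sequence-level map.

    token_weights would come from the visual-grounding filter (higher weight
    for visually grounded tokens, near-zero for purely linguistic ones).
    """
    maps = np.stack([token_saliency(a, g) for a, g in zip(attn_seq, grad_seq)])
    w = np.asarray(token_weights, dtype=float)
    w = w / w.sum()                                   # normalize token weights
    return np.tensordot(w, maps, axes=1)              # weighted sum -> (H, W)

# Toy example: 3 generated tokens, 2 heads, 4x4 grid of image patches.
rng = np.random.default_rng(1)
attn_seq = rng.random((3, 2, 4, 4))
grad_seq = rng.standard_normal((3, 2, 4, 4))
smap = sequence_saliency(attn_seq, grad_seq, token_weights=[0.7, 0.1, 0.2])
print(smap.shape)  # (4, 4)
```

The (H, W) map would then be upsampled to the input image resolution for the perturbation and segmentation evaluations mentioned above.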
Problem

Research questions and friction points this paper is trying to address.

Vision-Language Models
Explainability
Autoregressive Models
Token-by-token Generation
Multimodal Interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic Explainability
Autoregressive Vision-Language Models
Attention Head Filtering
Token-level Attribution
Visually-grounded Explanation