Where MLLMs Attend and What They Rely On: Explaining Autoregressive Token Generation

📅 2025-09-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
The visual dependency mechanism underlying multimodal large language models’ (MLLMs) generation remains opaque, hindering their interpretability and reliability. To address this, we propose EAGLE—a lightweight, black-box attribution framework that, for the first time, jointly models sufficiency and necessity. It employs greedy search to efficiently identify sparse, high-fidelity image regions critical for generation, while enabling fine-grained, modality-aware analysis (e.g., distinguishing visual vs. linguistic priors). Crucially, EAGLE requires only forward inference—no gradients or model fine-tuning. Extensive evaluation across multiple open-source MLLMs demonstrates that EAGLE significantly outperforms existing methods in attribution fidelity, visual region localization accuracy, and hallucination detection, while reducing GPU memory consumption by up to 60%. Its design balances theoretical rigor with practical deployability, offering a scalable solution for MLLM interpretability.

Technology Category

Application Category

📝 Abstract
Multimodal large language models (MLLMs) have demonstrated remarkable capabilities in aligning visual inputs with natural language outputs. Yet, the extent to which generated tokens depend on visual modalities remains poorly understood, limiting interpretability and reliability. In this work, we present EAGLE, a lightweight black-box framework for explaining autoregressive token generation in MLLMs. EAGLE attributes any selected tokens to compact perceptual regions while quantifying the relative influence of language priors and perceptual evidence. The framework introduces an objective function that unifies sufficiency (insight score) and indispensability (necessity score), optimized via greedy search over sparsified image regions for faithful and efficient attribution. Beyond spatial attribution, EAGLE performs modality-aware analysis that disentangles what tokens rely on, providing fine-grained interpretability of model decisions. Extensive experiments across open-source MLLMs show that EAGLE consistently outperforms existing methods in faithfulness, localization, and hallucination diagnosis, while requiring substantially less GPU memory. These results highlight its effectiveness and practicality for advancing the interpretability of MLLMs. The code is available at https://github.com/RuoyuChen10/EAGLE.
Problem

Research questions and friction points this paper is trying to address.

Explains token generation dependencies on visual and language inputs
Quantifies influence of perceptual evidence versus language priors
Provides fine-grained interpretability for multimodal model decisions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight black-box framework for token attribution
Unifies sufficiency and indispensability via greedy search
Performs modality-aware analysis for fine-grained interpretability
🔎 Similar Papers
No similar papers found.