🤖 AI Summary
Existing MLLM interpretability methods focus on cross-modal attribution while neglecting intra-modal, token-level dependencies: isolated patch attribution in vision is constrained by local receptive fields and yields fragmented explanations, while sequential token dependencies in text induce spurious activations that degrade attribution fidelity. This paper proposes Multi-Scale Explanation Aggregation (MSEA) and Activation Ranking Correlation (ARC), the first framework to systematically model fine-grained intra-modal interactions in both vision and language. MSEA aggregates attributions over multi-scale inputs to dynamically adjust receptive fields and produce spatially coherent visual explanations, while ARC scores contextual tokens by the alignment of their top-k prediction rankings and suppresses spurious activations from irrelevant context. Evaluated on mainstream MLLMs and standard benchmarks, the approach consistently improves attribution coherence, accuracy, and fidelity, yielding more complete and robust cross-modal explanations.
📝 Abstract
Multimodal Large Language Models (MLLMs) have achieved remarkable success across diverse vision-language tasks, yet their internal decision-making mechanisms remain insufficiently understood. Existing interpretability research has primarily focused on cross-modal attribution, identifying which image regions the model attends to during output generation. However, these approaches often overlook intra-modal dependencies. In the visual modality, attributing importance to isolated image patches ignores spatial context due to limited receptive fields, resulting in fragmented and noisy explanations. In the textual modality, reliance on preceding tokens introduces spurious activations, and failure to mitigate this interference compromises attribution fidelity. To address these limitations, we propose enhancing interpretability by leveraging intra-modal interactions. For the visual branch, we introduce *Multi-Scale Explanation Aggregation* (MSEA), which aggregates attributions over multi-scale inputs to dynamically adjust receptive fields, producing more holistic and spatially coherent visual explanations. For the textual branch, we propose *Activation Ranking Correlation* (ARC), which measures the relevance of contextual tokens to the current token via the alignment of their top-k prediction rankings. ARC uses this relevance to suppress spurious activations from irrelevant contexts while preserving semantically coherent ones. Extensive experiments across state-of-the-art MLLMs and benchmark datasets demonstrate that our approach consistently outperforms existing interpretability methods, yielding more faithful and fine-grained explanations of model behavior.
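To make the MSEA idea concrete, below is a minimal PyTorch sketch. The abstract only states that attributions over multi-scale inputs are aggregated; the scale set, bilinear resampling, and mean aggregation are illustrative assumptions, and `attribute` is a hypothetical stand-in for whatever patch-level attribution method is being explained.

```python
# Minimal sketch of Multi-Scale Explanation Aggregation (MSEA), under the
# assumptions noted above; not the paper's exact formulation.
from typing import Callable, Sequence

import torch
import torch.nn.functional as F


def msea(
    image: torch.Tensor,  # (C, H, W) input image
    attribute: Callable[[torch.Tensor], torch.Tensor],  # image -> (h, w) attribution map
    scales: Sequence[float] = (0.75, 1.0, 1.25),  # assumed scale set
) -> torch.Tensor:
    """Aggregate attribution maps computed at multiple input scales."""
    _, H, W = image.shape
    maps = []
    for s in scales:
        # Rescaling the input varies the image area each patch token covers,
        # i.e., it dynamically adjusts the effective receptive field.
        scaled = F.interpolate(
            image.unsqueeze(0), scale_factor=s,
            mode="bilinear", align_corners=False,
        ).squeeze(0)
        attr = attribute(scaled)  # per-scale attribution map
        # Resample every map back to a common resolution before aggregating.
        attr = F.interpolate(
            attr[None, None], size=(H, W),
            mode="bilinear", align_corners=False,
        )[0, 0]
        maps.append(attr)
    # Simple mean aggregation across scales (one plausible choice).
    return torch.stack(maps).mean(dim=0)
```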
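Similarly, here is a minimal sketch of the ARC relevance score, assuming "alignment of top-k prediction rankings" is measured as a Spearman-style correlation between the current token's top-k candidates and each context token's ranking of those same candidates. The choice of k, the correlation form, and gating attributions by multiplication are all assumptions for illustration.

```python
# Hypothetical sketch of Activation Ranking Correlation (ARC).
import torch


def arc_relevance(
    current_logits: torch.Tensor,  # (V,) next-token logits at the current position
    context_logits: torch.Tensor,  # (T, V) next-token logits at each context position
    k: int = 10,  # assumed top-k size
) -> torch.Tensor:
    """Score each context token by how well its prediction ranking over the
    current token's top-k candidates aligns with the current ranking."""
    topk = current_logits.topk(k).indices  # current top-k token ids
    # Rank of every vocabulary item under each context position's prediction:
    # argsort of the descending argsort yields the inverse permutation.
    order = context_logits.argsort(dim=-1, descending=True)  # (T, V)
    ranks = order.argsort(dim=-1).float()  # ranks[t, v] = rank of token v
    ctx_ranks = ranks[:, topk]  # (T, k) context ranks of current top-k tokens
    cur_ranks = torch.arange(  # 0..k-1 by construction of topk
        k, dtype=torch.float32, device=current_logits.device
    )
    # Pearson correlation of the two rank vectors, per context token.
    c = ctx_ranks - ctx_ranks.mean(dim=-1, keepdim=True)
    q = cur_ranks - cur_ranks.mean()
    rho = (c * q).sum(-1) / (c.norm(dim=-1) * q.norm() + 1e-8)
    return rho.clamp(min=0.0)  # keep only positive alignment as relevance


# Usage (assumed): down-weight attribution mass on irrelevant context tokens.
# gated_attr = token_attributions * arc_relevance(current_logits, context_logits)
```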