🤖 AI Summary
This work addresses the challenge of enhancing interpretability in language and vision models by effectively leveraging the information encoded in transformer attention mechanisms. Existing XAI methods face limitations in both local feature attribution and global concept-level analysis. To overcome these limitations, we propose two novel approaches: (1) integrating attention weights into the Shapley value decomposition framework via an attention-weighted characteristic function for more precise local attribution; and (2) combining concept activation vectors (CAVs) with directional derivatives to define an attention-guided concept sensitivity metric, enabling interpretable global semantic analysis. Our methodology unifies cooperative game theory and gradient-based analysis, ensuring theoretical rigor and practical feasibility. Extensive experiments on multiple standard benchmarks demonstrate significant improvements in explanation accuracy and consistency. The proposed framework provides a unified, scalable, attention-enhanced XAI paradigm for transformer-based models.
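The first approach can be illustrated with a minimal sketch. The characteristic function below, which scores a coalition of tokens by summing the attention weights between its members, is an illustrative assumption on our part (the paper's exact definition may differ); the Shapley computation itself is the standard exact formula over all coalitions.

```python
from itertools import combinations
from math import factorial

import numpy as np

def attention_characteristic(S, A):
    """Hypothetical coalition value: sum of pairwise attention weights
    A[i, j] among the tokens in coalition S (illustrative assumption)."""
    S = list(S)
    return sum(A[i, j] for i in S for j in S if i != j)

def shapley_values(n, v):
    """Exact Shapley values for n players under characteristic function v."""
    phi = np.zeros(n)
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (v(set(S) | {i}) - v(set(S)))
    return phi

rng = np.random.default_rng(0)
A = rng.random((4, 4))                    # toy attention matrix for 4 tokens
v = lambda S: attention_characteristic(S, A)
phi = shapley_values(4, v)
# Efficiency axiom: attributions sum to v(grand coalition) - v(empty set)
assert np.isclose(phi.sum(), v({0, 1, 2, 3}) - v(set()))
```

Exact enumeration is exponential in the number of tokens; in practice such attributions would be approximated by sampling, but the toy case verifies that the attention-driven characteristic function still satisfies the efficiency axiom.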
📝 Abstract
The attention mechanism lies at the core of the transformer architecture, providing an interpretable model-internal signal that has motivated a growing interest in attention-based model explanations. Although attention weights do not directly determine model outputs, they reflect patterns of token influence that can inform and complement established explainability techniques. This work studies the potential of utilising the information encoded in attention weights to provide meaningful model explanations by integrating them into explainable AI (XAI) frameworks that target fundamentally different aspects of model behaviour. To this end, we develop two novel explanation methods applicable to both natural language processing and computer vision tasks. The first integrates attention weights into the Shapley value decomposition by redefining the characteristic function in terms of pairwise token interactions via attention weights, thus adapting this widely used game-theoretic solution concept to provide attention-driven attributions for local explanations. The second incorporates attention weights into token-level directional derivatives defined through concept activation vectors to measure concept sensitivity for global explanations. Our empirical evaluations on standard benchmarks and in a comparison study with widely used explanation methods show that attention weights can be meaningfully incorporated into the studied XAI frameworks, highlighting their value in enriching transformer explainability.
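The second method, attention-weighted concept sensitivity, can likewise be sketched. All names and shapes below are illustrative assumptions (the paper's exact formulation is not reproduced here): per-token directional derivatives along a unit CAV are aggregated with attention weights, using a toy linear head so the gradient is analytic.

```python
import numpy as np

def attention_concept_sensitivity(grad, cav, a):
    """Hypothetical attention-guided concept sensitivity.

    grad: d(output)/d(token activations), shape (n_tokens, d)
    cav:  unit concept activation vector, shape (d,)
    a:    attention weights over tokens, shape (n_tokens,)
    """
    token_dirs = grad @ cav           # per-token directional derivative
    return float(a @ token_dirs)      # attention-weighted aggregation

rng = np.random.default_rng(1)
n, d = 5, 8
w = rng.normal(size=d)                # toy linear head: output = sum_t H_t . w
H = rng.normal(size=(n, d))           # token activations (unused by the gradient)
grad = np.tile(w, (n, 1))             # d(output)/dH_t = w for every token
cav = rng.normal(size=d)
cav /= np.linalg.norm(cav)            # CAVs are conventionally unit-norm
a = np.full(n, 1.0 / n)               # uniform attention as a baseline
score = attention_concept_sensitivity(grad, cav, a)
# With uniform attention, the score reduces to the mean per-token derivative
assert np.isclose(score, (grad @ cav).mean())
```

Replacing the uniform vector `a` with a model's actual attention distribution (e.g. attention from a classification token) is what distinguishes this score from a plain TCAV-style sensitivity averaged over tokens.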