T-TAME: Trainable Attention Mechanism for Explaining Convolutional Networks and Vision Transformers

📅 2024-03-07
🏛️ IEEE Access
📈 Citations: 6
Influential: 0
🤖 AI Summary
The weak interpretability of deep image classification models (e.g., CNNs and ViTs), together with the reliance of existing explanation techniques on computationally expensive perturbations, motivates this work. We propose a lightweight, plug-and-play, end-to-end trainable attention mechanism that requires no backbone modification and generates high-fidelity saliency maps in a single forward pass. Our approach introduces a learnable attention module jointly optimized with a classification loss and an explanation consistency loss, enabling, for the first time, unified interpretability across both CNNs and ViTs. Extensive evaluation on ImageNet with VGG-16, ResNet-50, and ViT-B-16 demonstrates state-of-the-art performance across multiple interpretability metrics. The generated explanations match or surpass those of leading perturbation-based methods in quality, while reducing inference overhead by approximately 100×.
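The summary above describes an attention module that fuses intermediate feature maps into a sigmoid-activated explanation map, which can then mask the input when computing the classification loss. A minimal pure-Python sketch of that idea follows; the scalar per-layer weights and the function names are illustrative assumptions for this sketch, not the paper's exact architecture:

```python
import math

def sigmoid(x):
    """Logistic function, squashing fused activations into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def fuse_feature_maps(feature_maps, weights, bias=0.0):
    """Fuse per-layer feature maps (same HxW) into one explanation map.

    feature_maps: list of HxW grids, one per tapped backbone layer
    weights: one learnable scalar per layer (a stand-in for the
             module's learnable fusion; illustrative assumption)
    Returns an HxW map with values in (0, 1) via a sigmoid, in the
    spirit of attention-style explanation modules.
    """
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for fm, wt in zip(feature_maps, weights):
        for i in range(h):
            for j in range(w):
                fused[i][j] += wt * fm[i][j]
    return [[sigmoid(v + bias) for v in row] for row in fused]

def mask_input(image, exp_map):
    """Element-wise masking of the input by the explanation map, as
    used when scoring the masked image with the frozen classifier."""
    return [[image[i][j] * exp_map[i][j] for j in range(len(image[0]))]
            for i in range(len(image))]
```

During training, the backbone stays frozen while the fusion weights are updated so that the masked input still classifies correctly (classification loss) and the map stays sparse and smooth (consistency/regularization terms). At inference, one forward pass through backbone plus module yields the saliency map.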

📝 Abstract
The development and adoption of Vision Transformers and other deep-learning architectures for image classification tasks have been rapid. However, the “black box” nature of neural networks is a barrier to adoption in applications where explainability is essential. While some techniques for generating explanations have been proposed, primarily for Convolutional Neural Networks, adapting such techniques to the new paradigm of Vision Transformers is non-trivial. This paper presents T-TAME, Transformer-compatible Trainable Attention Mechanism for Explanations (https://github.com/IDT-ITI/T-TAME), a general methodology for explaining deep neural networks used in image classification tasks. The proposed architecture and training technique can be easily applied to any convolutional or Vision Transformer-like neural network, using a streamlined training approach. After training, explanation maps can be computed in a single forward pass; these explanation maps are comparable to or outperform the outputs of computationally expensive perturbation-based explainability techniques, achieving state-of-the-art performance. We apply T-TAME to three popular deep learning classifier architectures, VGG-16, ResNet-50, and ViT-B-16, trained on the ImageNet dataset, and we demonstrate improvements over existing state-of-the-art explainability methods. A detailed analysis of the results and an ablation study provide insights into how the T-TAME design choices affect the quality of the generated explanation maps.
Problem

Research questions and friction points this paper is trying to address.

Explaining Vision Transformers and CNNs for image classification
Overcoming black-box nature of neural networks for explainability
Generating efficient, high-quality explanation maps with T-TAME
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trainable attention mechanism for explainability
Compatible with CNNs and Vision Transformers
Generates explanation maps in one forward pass
Mariano V. Ntrougkas
Centre for Research and Technology Hellas (CERTH) / Information Technologies Institute (ITI)
Nikolaos Gkalelis
Centre for Research and Technology Hellas (CERTH) / Information Technologies Institute (ITI)
V. Mezaris
Centre for Research and Technology Hellas (CERTH) / Information Technologies Institute (ITI)