Interpretability-Aware Vision Transformer

📅 2023-09-14
🏛️ arXiv.org
📈 Citations: 7
Influential: 0
🤖 AI Summary
Existing ViT interpretability methods suffer from poor generalizability, reliance on post-hoc processing, and failure under suboptimal model training or when critical regions are overlooked. To address these limitations, we propose IA-ViT, the first end-to-end trainable Vision Transformer framework with *interpretability-aware* optimization. During training, IA-ViT jointly optimizes the feature extractor, predictor, and a lightweight single-head self-attention interpreter—designed to faithfully reconstruct the predictor’s decision behavior—thereby achieving task-agnostic, architecture-general *intrinsic interpretability*. We introduce a self-supervised explanation consistency loss to enforce tight coupling between prediction and explanation. Evaluated across multiple image classification benchmarks, IA-ViT simultaneously improves both classification accuracy and quantitative interpretability metrics (Faithfulness and Monotonicity). Visualizations further confirm strong semantic alignment between the model’s attended regions and human-understandable object parts.
📝 Abstract
Vision Transformers (ViTs) have become prominent models for solving various vision tasks. However, the interpretability of ViTs has not kept pace with their promising performance. While there has been a surge of interest in developing *post hoc* solutions to explain ViTs' outputs, these methods do not generalize to different downstream tasks and various transformer architectures. Furthermore, if ViTs are not properly trained with the given data and do not prioritize the region of interest, the *post hoc* methods would be less effective. Instead of developing another *post hoc* approach, we introduce a novel training procedure that inherently enhances model interpretability. Our interpretability-aware ViT (IA-ViT) draws inspiration from a fresh insight: both the class patch and image patches consistently generate predicted distributions and attention maps. IA-ViT is composed of a feature extractor, a predictor, and an interpreter, which are trained jointly with an interpretability-aware training objective. Consequently, the interpreter simulates the behavior of the predictor and provides a faithful explanation through its single-head self-attention mechanism. Our comprehensive experimental results demonstrate the effectiveness of IA-ViT in several image classification tasks, with both qualitative and quantitative evaluations of model performance and interpretability. Source code is available from: https://github.com/qiangyao1988/IA-ViT.
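The abstract describes an interpreter that explains predictions through a single-head self-attention mechanism, with the class token attending over image patches. A minimal NumPy sketch of that idea is below; the shapes, the projection matrices `W_q`/`W_k`, and the use of the class token as the sole query are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def single_head_attention_map(tokens, W_q, W_k):
    """Attention map over image patches from one self-attention head.

    tokens: (n+1, d) array; row 0 is the class token, rows 1..n are patches.
    W_q, W_k: (d, d_k) query/key projections (hypothetical parameters).
    Returns a length-n attention distribution over the image patches.
    """
    q = tokens[0] @ W_q                 # class-token query, shape (d_k,)
    k = tokens[1:] @ W_k                # patch keys, shape (n, d_k)
    scores = k @ q / np.sqrt(k.shape[1])
    scores -= scores.max()              # numerical stability before softmax
    weights = np.exp(scores)
    return weights / weights.sum()      # softmax over the n patches

rng = np.random.default_rng(0)
d, d_k, n = 16, 8, 4
tokens = rng.standard_normal((n + 1, d))
attn = single_head_attention_map(tokens,
                                 rng.standard_normal((d, d_k)),
                                 rng.standard_normal((d, d_k)))
```

Because there is only one head, the resulting distribution `attn` can be reshaped into a patch-grid heatmap and read directly as the explanation, without the head-averaging or rollout steps that multi-head *post hoc* methods require.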
Problem

Research questions and friction points this paper is trying to address.

Enhancing interpretability of Vision Transformers during training
Generalizing interpretability across tasks and architectures
Improving faithfulness of explanations via joint training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Interpretability-aware training procedure for ViTs
Joint training of feature extractor, predictor, interpreter
Single-head self-attention for faithful explanations
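The joint training of predictor and interpreter can be sketched as a weighted sum of a prediction loss and an explanation-consistency term. This is a hedged illustration only: the KL form of the consistency loss and the trade-off weight `lam` are assumptions, not the paper's exact objective.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ia_vit_loss(pred_logits, interp_logits, label, lam=0.5):
    """Illustrative joint objective: cross-entropy on the predictor plus a
    KL term pushing the interpreter's distribution toward the predictor's.
    lam is a hypothetical trade-off weight."""
    p = softmax(pred_logits)            # predictor distribution
    q = softmax(interp_logits)          # interpreter distribution
    ce = -np.log(p[label] + 1e-12)      # prediction loss
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)))  # consistency
    return ce + lam * kl

loss = ia_vit_loss(np.array([2.0, 0.5, -1.0]),
                   np.array([1.5, 0.7, -0.8]), label=0)
```

Minimizing the second term ties the explanation to the prediction during training, which is the sense in which the interpreter "simulates the behavior of the predictor" rather than being fit after the fact.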