AI Summary
To address the lack of interpretability in Vision Transformers (ViTs), this paper proposes Keypoint Counting Classifiers (KCCs), the first zero-training self-explaining framework for ViTs: it requires no fine-tuning, introduces no additional parameters, and instead extracts semantically meaningful keypoints solely by analyzing the internal attention maps of pre-trained ViTs. Classification is performed via cross-image keypoint matching and counting. The method inherently supports intuitive, human-readable visualizations; its decision process is semantically aligned and fully traceable, significantly enhancing trustworthiness in human-AI collaboration. Extensive experiments across multiple benchmark datasets demonstrate that KCCs consistently outperform existing self-explaining baselines in both explanation quality and classification accuracy. By offering an efficient, lightweight, plug-and-play paradigm for interpretability, KCCs provide a practical solution for general-purpose vision foundation models.
Abstract
Current approaches for designing self-explainable models (SEMs) require complicated training procedures and specific architectures, which make them impractical. With the advance of general-purpose foundation models based on Vision Transformers (ViTs), this impracticality becomes even more problematic. Therefore, new methods are necessary to provide transparency and reliability to ViT-based foundation models. In this work, we present a new method for turning any well-trained ViT-based model into a SEM without retraining, which we call Keypoint Counting Classifiers (KCCs). Recent works have shown that ViTs can automatically identify matching keypoints between images with high precision, and we build on these results to create an easily interpretable decision process that is inherently visualizable in the input. We perform an extensive evaluation showing that KCCs improve human-machine communication compared to recent baselines. We believe that KCCs constitute an important step towards making ViT-based foundation models more transparent and reliable.
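To make the match-and-count decision rule concrete, the following is a minimal sketch of how a keypoint counting classifier could work, assuming keypoint descriptors (e.g. derived from a pre-trained ViT's attention maps) have already been extracted per image. The function names (`mutual_matches`, `kcc_predict`) and the mutual-nearest-neighbor matching criterion are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a Keypoint Counting Classifier (KCC) decision rule.
# Assumes each image is represented by a set of L2-normalized keypoint
# descriptors (rows of a matrix); random vectors stand in for real features.
import numpy as np

def mutual_matches(a: np.ndarray, b: np.ndarray) -> int:
    """Count mutual nearest-neighbor pairs between two descriptor sets."""
    sim = a @ b.T                   # cosine similarity (rows are unit-norm)
    nn_ab = sim.argmax(axis=1)      # best match in b for each keypoint of a
    nn_ba = sim.argmax(axis=0)      # best match in a for each keypoint of b
    # A pair counts only if the match agrees in both directions.
    return int(np.sum(nn_ba[nn_ab] == np.arange(a.shape[0])))

def kcc_predict(query: np.ndarray, exemplars_by_class: dict) -> str:
    """Predict the class whose exemplar images share the most matched
    keypoints with the query image."""
    scores = {cls: sum(mutual_matches(query, ex) for ex in exemplars)
              for cls, exemplars in exemplars_by_class.items()}
    return max(scores, key=scores.get)
```

Because the prediction is just a sum of matched keypoint pairs, every match contributing to the decision can be drawn as a line between two image regions, which is what makes the decision process directly visualizable in the input.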