Keypoint Counting Classifiers: Turning Vision Transformers into Self-Explainable Models Without Training

πŸ“… 2025-12-19
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
To address the lack of interpretability in Vision Transformers (ViTs), this paper proposes Keypoint Counting Classifiers (KCCs), the first zero-training self-explaining framework for ViTs: it requires no fine-tuning, introduces no additional parameters, and instead extracts semantically meaningful keypoints solely by analyzing the internal attention maps of pre-trained ViTs. Classification is performed via cross-image keypoint matching and counting. The method inherently supports intuitive, human-readable visualizations; its decision process is semantically grounded and fully traceable, which strengthens trust in human-AI collaboration. Extensive experiments across multiple benchmark datasets demonstrate that KCCs consistently outperform existing self-explaining baselines in both explanation quality and classification accuracy. By offering an efficient, lightweight, plug-and-play paradigm for interpretability, KCCs provide a practical solution for general-purpose vision foundation models.
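The paper's own implementation is not reproduced here, but the matching-and-counting decision rule the summary describes can be sketched in a few lines of numpy. The function names and the mutual-nearest-neighbour matching criterion below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def match_keypoints(feats_a, feats_b, sim_threshold=0.8):
    """Mutual-nearest-neighbour matching between two sets of
    L2-normalised keypoint descriptors (one descriptor per row).

    Note: the threshold and mutual-NN rule are assumptions for
    illustration; the paper may use a different matching criterion.
    """
    sim = feats_a @ feats_b.T                 # cosine similarities
    nn_ab = sim.argmax(axis=1)                # best match a -> b
    nn_ba = sim.argmax(axis=0)                # best match b -> a
    return [
        (i, j) for i, j in enumerate(nn_ab)
        if nn_ba[j] == i and sim[i, j] >= sim_threshold
    ]

def classify_by_counting(test_feats, class_prototypes):
    """Predict the class whose prototype images share the most
    matched keypoints with the test image."""
    counts = {
        label: sum(len(match_keypoints(test_feats, p)) for p in protos)
        for label, protos in class_prototypes.items()
    }
    return max(counts, key=counts.get), counts
```

Because the prediction is literally a count of matched keypoints, each decision can be rendered by drawing the matched points on the test and prototype images, which is the sense in which the classifier is self-explaining.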

πŸ“ Abstract
Current approaches for designing self-explainable models (SEMs) require complicated training procedures and specific architectures, which make them impractical. With the advance of general-purpose foundation models based on Vision Transformers (ViTs), this impracticality becomes even more problematic. Therefore, new methods are necessary to provide transparency and reliability to ViT-based foundation models. In this work, we present a new method for turning any well-trained ViT-based model into a SEM without retraining, which we call Keypoint Counting Classifiers (KCCs). Recent works have shown that ViTs can automatically identify matching keypoints between images with high precision, and we build on these results to create an easily interpretable decision process that is inherently visualizable in the input. We perform an extensive evaluation which shows that KCCs improve human-machine communication compared to recent baselines. We believe that KCCs constitute an important step towards making ViT-based foundation models more transparent and reliable.
Problem

Research questions and friction points this paper is trying to address.

Turning Vision Transformers into self-explainable models without retraining.
Providing transparency and reliability to ViT-based foundation models.
Creating an interpretable, visualizable decision process for ViTs.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transforms Vision Transformers into self-explainable models without retraining
Uses keypoint matching between images for interpretable decision processes
Provides visualizable explanations directly in the input image space
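The bullets above hinge on reading keypoints off the attention maps of a frozen, pre-trained ViT. One plausible minimal sketch, assuming keypoints are the patches receiving the most [CLS]-to-patch attention (the paper's actual selection rule may differ), is:

```python
import numpy as np

def keypoints_from_attention(cls_attn, patch_feats, top_k=10):
    """Select the top_k patches that receive the most [CLS] attention
    (averaged over heads) and return their indices and descriptors.

    cls_attn:    (heads, num_patches) attention from [CLS] to patches
    patch_feats: (num_patches, dim) patch embeddings from the same layer

    Assumption: top-k mean-attention selection is illustrative only.
    """
    saliency = cls_attn.mean(axis=0)          # average over heads
    idx = np.argsort(saliency)[::-1][:top_k]  # most-attended patches
    desc = patch_feats[idx]
    desc = desc / np.linalg.norm(desc, axis=1, keepdims=True)
    return idx, desc
```

Because patch indices map back to fixed image locations, the selected keypoints can be drawn directly on the input image, which is what makes the resulting explanation visualizable without any extra machinery.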
πŸ”Ž Similar Papers
No similar papers found.
Kristoffer Wickström
Department of Physics and Technology, UiT The Arctic University of Norway
Teresa Dorszewski
Department of Applied Mathematics and Computer Science, Technical University of Denmark
Siyan Chen
Department of Applied Mathematics and Computer Science, Technical University of Denmark
Michael Kampffmeyer
Department of Physics and Technology, UiT The Arctic University of Norway
Elisabeth Wetzer
UiT The Arctic University of Norway
Robert Jenssen
Visual Intelligence, UiT The Arctic University of Norway & Norwegian Computing Center & Pioneer Centre for AI, University of Copenhagen
Machine learning · information theoretic learning · kernel methods · deep learning · health data analytics