Post-hoc Self-explanation of CNNs

📅 2026-03-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Standard convolutional neural networks lack semantically coherent interpretability because their internal prototypes do not accurately represent the data. This work proposes a unified k-means post-hoc interpretation framework that replaces the final linear layer of a CNN with a k-means classifier and leverages shallower intermediate feature activations, such as those from the B234 blocks of ResNet34, to generate concept-based explanation maps without requiring gradient computation. By exploiting the spatial consistency of convolutional receptive fields, the method substantially improves the semantic fidelity of explanations at the cost of a slight reduction in predictive performance, yielding self-explainability across the encoder, the classifier, and the intermediate feature activations.
📝 Abstract
Although standard Convolutional Neural Networks (CNNs) can be mathematically reinterpreted as Self-Explainable Models (SEMs), their built-in prototypes do not on their own accurately represent the data. Replacing the final linear layer with a $k$-means-based classifier addresses this limitation without compromising performance. This work introduces a common formalization of $k$-means-based post-hoc explanations for the classifier, the encoder's final output (B4), and combinations of intermediate feature activations. The latter approach leverages the spatial consistency of convolutional receptive fields to generate concept-based explanation maps, which are supported by gradient-free feature attribution maps. Empirical evaluation with a ResNet34 shows that using shallower, less compressed feature activations, such as those from the last three blocks (B234), results in a trade-off between semantic fidelity and a slight reduction in predictive performance.
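To make the core idea concrete, here is a minimal sketch of replacing a CNN's final linear layer with a per-class k-means classifier that scores inputs by distance to the nearest class prototype. This is an illustrative reconstruction, not the paper's exact formulation: the random "encoder features" stand in for real B4 activations, and the negative-nearest-distance scoring rule, the number of prototypes per class, and the tiny Lloyd's-iteration k-means are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Minimal Lloyd's k-means; returns k prototype vectors (illustrative only)."""
    centers = X[rng.choice(len(X), k, replace=False)]  # fancy indexing copies
    for _ in range(iters):
        # Assign each sample to its nearest center, then recompute means.
        d = np.linalg.norm(X[:, None] - centers[None], axis=-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(axis=0)
    return centers

# Stand-ins for B4 encoder features (512-d) of a labeled training set.
feats = rng.normal(size=(100, 512))
labels = rng.integers(0, 5, size=100)

# One k-means prototype set per class replaces the linear classification head.
prototypes = [kmeans(feats[labels == c], k=2) for c in range(5)]

def predict(z):
    """Score each class by negative distance to its nearest prototype."""
    scores = np.stack([
        -np.linalg.norm(z[:, None] - P[None], axis=-1).min(axis=1)
        for P in prototypes
    ], axis=1)
    return scores.argmax(axis=1)

preds = predict(rng.normal(size=(3, 512)))
```

Because the prediction is just a distance comparison against explicit prototypes, the explanation requires no gradient computation: the nearest prototype itself identifies which learned concept drove the decision.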
Problem

Research questions and friction points this paper is trying to address.

Self-Explainable Models
Convolutional Neural Networks
Post-hoc Explanation
Feature Activations
Interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

post-hoc explanation
self-explainable models
k-means classifier
concept-based explanation
feature attribution