🤖 AI Summary
This work addresses the semantic ambiguity in traditional CNNs caused by unordered operations, which undermines their intrinsic ability to attribute features to class predictions. To resolve this, the authors propose Feature-Align CNN (FA-CNN), an end-to-end architecture that preserves pixel-to-logit ordering throughout the network via order-preserving damped skip connections and a global average pooling classification head. Theoretically, the penultimate-layer feature maps of FA-CNN are equivalent to Grad-CAM heatmaps, with features progressively evolving across depth. Empirical results demonstrate that FA-CNN achieves competitive performance on standard image classification benchmarks, while its raw feature maps significantly outperform both Grad-CAM and permutation-based baselines in pixel-ablation interpretability tasks, thereby validating its inherent attribution capability.
📝 Abstract
We present Feature-Align CNN (FA-CNN), a prototype CNN architecture with intrinsic class attribution through end-to-end feature alignment. Our intuition is that unordered operations such as Linear and Conv2D layers cause unnecessary shuffling and mixing of semantic concepts, making raw feature maps difficult to interpret. We introduce two new order-preserving layers: the damped skip connection and the global average pooling classifier head. These layers force the model to maintain end-to-end feature alignment from the raw input pixels all the way to the final class logits. This alignment enhances interpretability by allowing the raw feature maps to intrinsically exhibit class attribution. We prove theoretically that FA-CNN's penultimate feature maps are identical to Grad-CAM saliency maps. Moreover, we prove that these feature maps morph gradually layer by layer, revealing how features evolve over network depth toward the penultimate class activations. FA-CNN performs well on benchmark image classification datasets, and we compare its averaged raw feature maps against Grad-CAM and permutation methods in a percent-of-pixels-removed interpretability task. We conclude with a discussion of limitations and future work, including extensions toward hybrid models.
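To make the two components concrete, here is a minimal NumPy sketch of how a damped skip connection and a global-average-pooling classifier head could work. All names (`damped_skip`, `gap_head`, `alpha`) and the specific blending formula are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def damped_skip(x, f, alpha=0.5):
    """Hypothetical damped skip connection: blend a layer's output with
    its spatially aligned input. Keeping the identity term at every layer
    preserves the correspondence between input pixels and feature-map
    positions, which is the alignment property the abstract describes."""
    return f(x) + alpha * x

def gap_head(feature_maps):
    """Global average pooling classifier head. `feature_maps` has shape
    (num_classes, H, W): one aligned map per class. Each logit is the
    spatial mean of its map, so the map itself doubles as a per-class
    attribution heatmap."""
    return feature_maps.mean(axis=(1, 2))

# Toy usage: 3 classes, 4x4 feature maps; class 1's map is uniformly active.
maps = np.zeros((3, 4, 4))
maps[1] += 1.0
logits = gap_head(maps)  # class 1 receives the largest logit
```

Because each logit is just the mean of one feature map, inspecting that map directly gives the class attribution, with no gradient-based post-hoc step required.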