🤖 AI Summary
Group-equivariant operators (GEOs) in neural networks lack a rigorous mathematical foundation for interpretability.
Method: We propose the first axiomatic framework for GEO interpretability: a non-commutativity metric, grounded in group representation theory and category theory, quantifies distances between GEOs (a sketch follows the summary); user-specified complexity preferences give a formal definition of interpretability; and non-expansive operator theory ensures stability and generalization.
Contribution/Results: We establish the first axiomatic interpretability system for GEOs driven by Group Equivariant Non-Expansive Operators (GENEOs), transforming interpretability into a measurable, tunable, and provable mathematical property. We prove theoretical robustness under perturbations and empirically validate the framework on CNN-based image classification, demonstrating consistency and verifiability between local and global explanations. This work bridges abstract algebraic structure with practical interpretability requirements, enabling principled analysis of equivariant deep learning models.
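To fix notation, a minimal mathematical sketch follows. The first two conditions are the standard GEO/GENEO axioms from the GENEO literature; the displayed distance is only one plausible reading of "measuring the non-commutativity of specific diagrams", and the connecting operators P and Q are introduced here purely for illustration, not taken from the paper.

```latex
% Standard GEO/GENEO conditions (the usual definitions from the GENEO
% literature; the paper's exact axioms may differ). \Phi and \Psi are spaces
% of bounded functions, the groups G and H act on them by composition, and
% T: G -> H is a group homomorphism.
\begin{align*}
  F(\varphi \circ g) &= F(\varphi) \circ T(g)
    && \forall\, \varphi \in \Phi,\ g \in G
    && \text{(equivariance: $F$ is a GEO)} \\
  \|F(\varphi_1) - F(\varphi_2)\|_\infty &\le \|\varphi_1 - \varphi_2\|_\infty
    && \forall\, \varphi_1, \varphi_2 \in \Phi
    && \text{(non-expansiveness: $F$ is a GENEO)}
\end{align*}
% A plausible form (an assumption, not the paper's verbatim definition) of the
% distance between GEOs F_1: \Phi -> \Psi and F_2: \Phi' -> \Psi', given
% connecting GENEOs P: \Phi -> \Phi' and Q: \Psi -> \Psi':
\[
  d(F_1, F_2) \;=\; \sup_{\varphi \in \Phi}
    \bigl\| (Q \circ F_1)(\varphi) - (F_2 \circ P)(\varphi) \bigr\|_\infty ,
\]
% i.e. how far the square built from F_1, F_2, P, Q is from commuting;
% d(F_1, F_2) = 0 exactly when the diagram commutes.
```

Under this reading, two GEOs are close whenever some pair of non-expansive connecting operators makes their square nearly commute.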
📝 Abstract
This paper introduces a rigorous mathematical framework for neural network explainability and, more broadly, for the explainability of equivariant operators, called Group Equivariant Operators (GEOs), based on Group Equivariant Non-Expansive Operator (GENEO) transformations. The central idea is to quantify the distance between GEOs by measuring the non-commutativity of specific diagrams. In addition, the paper proposes a definition of the interpretability of GEOs in terms of a complexity measure that can be tailored to each user's preferences. Finally, we explore the formal properties of this framework and show how it applies to classical machine learning scenarios, such as image classification with convolutional neural networks.
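As a concrete illustration of the non-commutativity measurement in the CNN setting, here is a minimal numerical sketch (assuming PyTorch; `equivariance_gap` and every other name below is hypothetical, not the paper's code). It estimates the sup-norm gap between the two paths around the diagram, F(g·x) versus g·F(x), for a convolutional layer F and the group G of cyclic image translations.

```python
# Illustrative sketch (not the paper's implementation): empirically estimating
# the non-commutativity of the diagram  g . F  vs  F . g  for a convolutional
# layer F and the group of cyclic (toroidal) image translations.
import torch
import torch.nn as nn

def equivariance_gap(layer: nn.Module, images: torch.Tensor, shifts) -> float:
    """Sup-norm estimate of || F(g.x) - g.F(x) || over sampled images and shifts."""
    gap = 0.0
    with torch.no_grad():
        fx = layer(images)
        for dy, dx in shifts:
            # g.x : act on the input by a circular shift
            gx = torch.roll(images, shifts=(dy, dx), dims=(-2, -1))
            # The two paths around the square: F(g.x) and g.F(x)
            f_gx = layer(gx)
            g_fx = torch.roll(fx, shifts=(dy, dx), dims=(-2, -1))
            gap = max(gap, (f_gx - g_fx).abs().max().item())
    return gap

# Usage: a circularly padded convolution commutes with cyclic shifts, so the
# gap is ~0 (machine precision) and the layer behaves as an exact GEO for this
# group; switching to zero padding yields a strictly positive gap.
conv = nn.Conv2d(1, 4, kernel_size=3, padding=1, padding_mode="circular")
x = torch.randn(8, 1, 32, 32)
print(equivariance_gap(conv, x, shifts=[(0, 1), (5, -3), (16, 16)]))
```

The same quantity, maximized over inputs, is one natural empirical surrogate for the diagram-based distance sketched above.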