🤖 AI Summary
This work addresses the lack of geometric interpretability and equivariance in standard neural networks by proposing a unified geometric modeling paradigm based on non-compact symmetric spaces $U/H$. Methodologically, it introduces a systematic construction of Cartan neural networks endowed with Lie group symmetry: layers are defined on symmetric spaces, inter-layer mappings satisfy group-action equivariance, and nonlinear transformations are realized via the Cartan decomposition so as to preserve the geometric structure. Theoretically, it establishes intrinsic connections between the network architecture and the curvature, geodesic geometry, and isometry-group representations of the underlying symmetric space. Experimentally, the model achieves competitive performance while exhibiting explicit differential-geometric semantics, such as geodesic-distance preservation and curvature-aware activation functions. This work provides a first rigorous equivariant framework for geometric deep learning on non-compact symmetric spaces.
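As background for the Cartan-decomposition construction mentioned above, the following is a standard sketch of the relevant algebraic structure (the notation is the conventional one for symmetric spaces, not taken from the paper itself): the Lie algebra of the non-compact group $U$ splits into the Lie algebra of the maximal compact subgroup $H$ and a complementary subspace of non-compact directions.

```latex
% Cartan decomposition of the Lie algebra \mathfrak{u} of the non-compact
% group U, with \mathfrak{h} the Lie algebra of the maximal compact
% subgroup H and \mathfrak{p} its orthogonal complement:
\mathfrak{u} = \mathfrak{h} \oplus \mathfrak{p},
\qquad
[\mathfrak{h},\mathfrak{h}] \subseteq \mathfrak{h}, \quad
[\mathfrak{h},\mathfrak{p}] \subseteq \mathfrak{p}, \quad
[\mathfrak{p},\mathfrak{p}] \subseteq \mathfrak{h}.
% For U/H non-compact, the exponential map restricted to \mathfrak{p}
% parametrizes the coset space globally:
p = \exp(X)\,H, \qquad X \in \mathfrak{p}.
```

In this picture, a "layer" living on $U/H$ can be coordinatized by $X \in \mathfrak{p}$, and maps that commute with the $U$-action on these cosets are the equivariant inter-layer maps referred to in the summary.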
📝 Abstract
Recent work has identified non-compact symmetric spaces $U/H$ as a promising class of homogeneous manifolds on which to develop a geometrically consistent theory of neural networks. An initial implementation of these concepts has been presented in a twin paper under the moniker of Cartan Neural Networks, demonstrating both the feasibility and the performance of these geometric concepts in a machine learning context. The current paper expands on the mathematical structures underpinning Cartan Neural Networks, detailing the geometric properties of the layers and how the maps between layers interact with these structures to make Cartan Neural Networks covariant and geometrically interpretable. Together, these twin papers constitute a first step towards a fully geometrically interpretable theory of neural networks exploiting group-theoretic structures.