🤖 AI Summary
Existing methods struggle to disentangle information loss from re-encoding in neural network representations under geometric transformations in unsupervised settings, and they lack a systematic framework for analyzing how equivariance and invariance evolve across layers. This work proposes SEIS, a subspace-based metric framework that models the subspace structure of feature responses under geometric transformations, enabling layer-wise quantification of equivariance and invariance without labels or explicit knowledge of the transformations. SEIS is the first method to decouple these two properties in an unsupervised manner. Applying it reveals that early layers of classification networks are predominantly equivariant while deeper layers become increasingly invariant, that data augmentation enhances invariance without sacrificing equivariance, and that multi-task learning combined with skip connections jointly strengthens both properties. Experiments demonstrate SEIS's ability to accurately recover known transformation structures.
📄 Abstract
Understanding how neural representations respond to geometric transformations is essential for evaluating whether learned features preserve meaningful spatial structure. Existing approaches primarily assess robustness by comparing model outputs under transformed inputs, offering limited insight into how geometric information is organized within internal representations and failing to distinguish between information loss and re-encoding. In this work, we introduce SEIS (Subspace-based Equivariance and Invariance Scores), a subspace metric for analyzing layer-wise feature representations under geometric transformations, disentangling equivariance from invariance without requiring labels or explicit knowledge of the transformation. Synthetic validation confirms that SEIS correctly recovers known transformations. Applied to trained classification networks, SEIS reveals a transition from equivariance in early layers to invariance in deeper layers, and that data augmentation increases invariance while preserving equivariance. We further show that multi-task learning induces synergistic gains in both properties at the shared encoder, and skip connections restore equivariance lost during decoding.
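To make the subspace idea concrete, here is a minimal sketch of how feature subspaces under a transformation could be compared. This is not the paper's actual SEIS definition (which is not given here); the function names and the choice of top-k PCA subspaces with principal-angle alignment are illustrative assumptions. The intuition: if the subspace spanned by features of transformed inputs stays aligned with that of the originals, geometric information is retained (equivariance-like re-encoding); if individual feature vectors themselves are unchanged, the layer behaves invariantly.

```python
import numpy as np

def subspace_basis(feats, k):
    """Orthonormal basis (dim, k) for the top-k principal subspace
    of a feature matrix of shape (n_samples, dim)."""
    centered = feats - feats.mean(axis=0)
    # Right singular vectors of the centered data span the principal subspace.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:k].T

def subspace_alignment(feats_a, feats_b, k=5):
    """Mean squared cosine of the principal angles between the two
    top-k feature subspaces; 1.0 means identical subspaces, 0.0 orthogonal."""
    qa = subspace_basis(feats_a, k)
    qb = subspace_basis(feats_b, k)
    # Singular values of Qa^T Qb are the cosines of the principal angles.
    s = np.linalg.svd(qa.T @ qb, compute_uv=False)
    return float(np.mean(s ** 2))
```

Under this sketch, `subspace_alignment(layer(x), layer(T(x)))` close to 1 while the per-sample features differ would suggest equivariant re-encoding, whereas near-identical per-sample features would indicate invariance; both quantities are computable without labels or knowledge of the transformation T.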