Approximate equivariance via projection-based regularisation

๐Ÿ“… 2026-01-08
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 1
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
Existing approximate equivariance methods rely on data augmentation, suffering from high sample complexity and inefficiency over continuous groups such as SO(3), while struggling to reconcile the true data distribution with imperfect symmetries. This work proposes the first projection-based regularisation framework that orthogonally decomposes linear layers into equivariant and non-equivariant components. Rather than applying penalties pointwise, it operates at the operator level, leveraging group representation theory to efficiently compute non-equivariant contributions across the full group orbit in both the spatial and spectral domains. The approach eliminates the need for data augmentation, substantially improving training efficiency and model performance, particularly in settings involving continuous symmetry groups.

๐Ÿ“ Abstract
Equivariance is a powerful inductive bias in neural networks, improving generalisation and physical consistency. Recently, however, non-equivariant models have regained attention due to their better runtime performance and the imperfect symmetries that may arise in real-world applications. This has motivated the development of approximately equivariant models that strike a middle ground between respecting symmetries and fitting the data distribution. Existing approaches in this field usually apply sample-based regularisers which depend on data augmentation at training time, incurring a high sample complexity, in particular for continuous groups such as $SO(3)$. This work instead approaches approximate equivariance via a projection-based regulariser which leverages the orthogonal decomposition of linear layers into equivariant and non-equivariant components. In contrast to existing methods, this penalises non-equivariance at an operator level across the full group orbit, rather than point-wise. We present a mathematical framework for computing the non-equivariance penalty exactly and efficiently in both the spatial and spectral domains. In our experiments, our method consistently outperforms prior approximate equivariance approaches in both model performance and efficiency, achieving substantial runtime gains over sample-based regularisers.
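The core idea can be sketched for a finite group: project a linear layer's weight matrix onto the equivariant subspace by group averaging, and penalise the Frobenius norm of the residual. This is a minimal numpy illustration assuming the cyclic group $C_4$ acting by cyclic shifts; the function names and the finite-group setting are illustrative assumptions, not the paper's actual implementation, which handles continuous groups such as $SO(3)$ via spectral computations.

```python
import numpy as np

def cyclic_shift_matrix(n, k):
    """Permutation matrix for a cyclic shift by k positions (representation of C_n)."""
    return np.roll(np.eye(n), k, axis=0)

def equivariant_projection(W, reps_in, reps_out):
    """Orthogonal projection of W onto the equivariant subspace via group averaging:
    P(W) = (1/|G|) * sum_g rho_out(g)^{-1} @ W @ rho_in(g)."""
    terms = [np.linalg.inv(Ro) @ W @ Ri for Ro, Ri in zip(reps_out, reps_in)]
    return sum(terms) / len(terms)

def nonequivariance_penalty(W, reps_in, reps_out):
    """Operator-level penalty: squared Frobenius norm of the non-equivariant residual."""
    return np.linalg.norm(W - equivariant_projection(W, reps_in, reps_out)) ** 2

n = 4
reps = [cyclic_shift_matrix(n, k) for k in range(n)]  # C_4 acting on R^4

rng = np.random.default_rng(0)
W_generic = rng.standard_normal((n, n))
# Circulant matrices commute with cyclic shifts, hence are exactly equivariant.
W_circulant = sum(c * cyclic_shift_matrix(n, k)
                  for k, c in enumerate([1.0, 0.5, -0.2, 0.3]))

print(nonequivariance_penalty(W_generic, reps, reps))    # strictly positive
print(nonequivariance_penalty(W_circulant, reps, reps))  # numerically zero
```

Because the projection is applied to the weights rather than to augmented samples, the penalty covers the entire group orbit at once, which is the source of the runtime advantage over sample-based regularisers described in the abstract.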
Problem

Research questions and friction points this paper is trying to address.

approximate equivariance
neural networks
symmetry
regularisation
continuous groups
Innovation

Methods, ideas, or system contributions that make the work stand out.

approximate equivariance
projection-based regularisation
orthogonal decomposition
operator-level penalty
continuous group