🤖 AI Summary
This work addresses how machine learning models can exploit known symmetries as a prior to improve generalization. Methodologically, we propose a training framework that combines data augmentation with equivariance regularization. We establish theoretically, for the first time, that their joint application induces group equivariance in the trained model, overcoming the limitations of relying on augmentation alone or on hand-crafted equivariant architectures. The approach combines group-action-based augmentation, an equivariance regularization loss, and Lie-group symmetry modeling, all embedded in the empirical risk minimization framework. Experiments on image and geometric learning tasks show that the method reduces equivariance error by over 98% and generalization error by 32%, without requiring custom equivariant network designs. This improves both the automatic acquisition and the practical applicability of symmetry-induced inductive biases.
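As an illustration (the notation here is a sketch of ours, not taken verbatim from the paper), the combined objective can be written as an augmented empirical risk plus an equivariance penalty, with $g$ drawn from the symmetry group $G$ and $\lambda$ a trade-off weight:

$$
\min_{f \in \mathcal{F}} \; \mathbb{E}_{(x,y)}\, \mathbb{E}_{g \sim G} \big[ \ell\big(f(g \cdot x),\, g \cdot y\big) \big] \;+\; \lambda \, \mathbb{E}_{x}\, \mathbb{E}_{g \sim G} \big\| f(g \cdot x) - g \cdot f(x) \big\|^{2}
$$

The first term is the standard risk on group-augmented data; driving the second term to zero is what makes the trained model (approximately) equivariant even though the architecture itself is unconstrained.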
📝 Abstract
In many machine learning tasks, known symmetries can be used as an inductive bias to improve model performance. In this paper, we consider learning group equivariance through training with data augmentation. We summarize results from our own previous paper and extend them to show that equivariance of the trained model can be achieved by training on augmented data in tandem with an equivariance regularization term.
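For concreteness, below is a minimal sketch of one training step implementing this idea; the choice of PyTorch, the cyclic rotation group C4 acting on an image-to-image task, the `rotate90` helper, and the weight `lam` are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def rotate90(t, k):
    """Act with the C4 element g_k: rotate (N, C, H, W) tensors by k * 90 degrees."""
    return torch.rot90(t, k, dims=(2, 3))

def training_loss(model, x, y, lam=1.0):
    """Augmented risk plus equivariance penalty for one sampled group element.

    Assumes an image-to-image task where C4 acts on both inputs and targets
    by rotation; `lam` is a hypothetical trade-off hyperparameter.
    """
    k = int(torch.randint(0, 4, (1,)))        # sample g ~ Uniform(C4)
    gx, gy = rotate90(x, k), rotate90(y, k)   # g . x and g . y
    task = F.mse_loss(model(gx), gy)          # empirical risk on augmented data
    # Equivariance penalty || f(g . x) - g . f(x) ||^2
    reg = F.mse_loss(model(gx), rotate90(model(x), k))
    return task + lam * reg
```

In this sketch the task loss alone corresponds to plain data augmentation; the penalty term is what additionally pushes the model toward exact equivariance, matching the paper's claim that the two are needed in tandem.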