Symmetry and Generalisation in Machine Learning

📅 2025-01-07
📈 Citations: 2
Influential: 0
🤖 AI Summary
This paper investigates the impact of symmetry, specifically invariance and equivariance, as an inductive bias on generalisation in supervised learning. Focusing on regression, it introduces a theoretical framework based on averaging operators and gives the first rigorous proof that, when the symmetry is correctly specified, any predictor that is not equivariant is strictly beaten in expected test risk by some equivariant predictor. Leveraging group representation theory and orbit-space modelling, the work formalises the orbit-representative perspective, extends it to the equivariant setting, and derives explicit bounds on the risk reduction for random-design least squares and kernel ridge regression. Empirical results confirm that symmetry-aware modelling consistently improves generalisation. The core contribution is a quantitative, computable, and verifiable link between symmetry structure and generalisation error, yielding a guaranteed risk improvement under correct symmetry specification.
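
A minimal sketch of the averaging-operator argument the summary refers to; all notation here is assumed for illustration rather than taken from the paper. Averaging any predictor over a compact group yields an equivariant one, and under correct symmetry specification the test risk splits orthogonally, so the discarded component can only have added risk.

```latex
% Sketch only; G, \lambda, \phi, \psi and f are assumed notation.
% G: compact group with Haar measure \lambda, acting on inputs via \phi
% and on outputs via \psi; f: any square-integrable predictor.
\[
  (\mathcal{Q}f)(x) = \int_G \psi(g)^{-1}\, f(\phi(g)\,x)\, \mathrm{d}\lambda(g)
\]
% \mathcal{Q} is a projection onto the equivariant predictors, so
% f = \mathcal{Q}f + f^\perp with f^\perp orthogonal to every equivariant map.
% If the regression target is equivariant and the input distribution is
% G-invariant, the squared-error test risk decomposes as
\[
  R(f) = R(\mathcal{Q}f) + \mathbb{E}\,\bigl\| f^{\perp}(X) \bigr\|^{2},
\]
% hence any predictor with f^\perp \neq 0 is strictly beaten by its average.
```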

📝 Abstract
This work is about understanding the impact of invariance and equivariance on generalisation in supervised learning. We use the perspective afforded by an averaging operator to show that for any predictor that is not equivariant, there is an equivariant predictor with strictly lower test risk on all regression problems where the equivariance is correctly specified. This constitutes a rigorous proof that symmetry, in the form of invariance or equivariance, is a useful inductive bias. We apply these ideas to equivariance and invariance in random design least squares and kernel ridge regression respectively. This allows us to specify the reduction in expected test risk in more concrete settings and express it in terms of properties of the group, the model and the data. Along the way, we give examples and additional results to demonstrate the utility of the averaging operator approach in analysing equivariant predictors. In addition, we adopt an alternative perspective and formalise the common intuition that learning with invariant models reduces to a problem in terms of orbit representatives. The formalism extends naturally to a similar intuition for equivariant models. We conclude by connecting the two perspectives and giving some ideas for future work.
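
A minimal numerical sketch of the averaging idea in the abstract, using a toy setup that is assumed here and not drawn from the paper: averaging a fitted predictor over a finite group (sign flips of the input) makes it exactly invariant, and on a problem whose target shares that symmetry the averaged predictor's test risk is never worse in expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression target with a known symmetry: invariant under negation
# of the input, f*(x) = f*(-x). (Illustrative setup only.)
def target(x):
    return np.cos(3 * x) + x**2

# Finite symmetry group acting on inputs: G = {identity, negation}.
group = [lambda x: x, lambda x: -x]

# Fit an unconstrained polynomial predictor on noisy training data.
x_train = rng.uniform(-2, 2, size=40)
y_train = target(x_train) + 0.3 * rng.standard_normal(40)
coeffs = np.polyfit(x_train, y_train, deg=5)
f = lambda x: np.polyval(coeffs, x)

# Averaging operator for a finite group: Qf(x) = (1/|G|) * sum_g f(g x).
# The averaged predictor is invariant by construction.
Qf = lambda x: sum(f(g(x)) for g in group) / len(group)

# Compare test risks on a symmetric (negation-invariant) test distribution.
x_test = rng.uniform(-2, 2, size=10_000)
y_test = target(x_test)
risk_f = np.mean((f(x_test) - y_test) ** 2)
risk_Qf = np.mean((Qf(x_test) - y_test) ** 2)
print(f"risk(f)  = {risk_f:.4f}")
print(f"risk(Qf) = {risk_Qf:.4f}")  # no larger in expectation
```

The same toy problem also illustrates the orbit-representative view mentioned in the abstract: an invariant predictor here is fully determined by its values on |x|, a canonical representative of each orbit {x, -x}, so learning reduces to regression on orbit representatives.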
Problem

Research questions and friction points this paper is trying to address.

Invariant Patterns
Variant Patterns
Predictive Model Performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pattern Recognition
Predictive Modeling
Rule Consistency