A Unified Framework to Enforce, Discover, and Promote Symmetry in Machine Learning

📅 2023-11-01
🏛️ arXiv.org
📈 Citations: 12
Influential: 0
🤖 AI Summary
This paper addresses the unified treatment of symmetry in machine learning. It proposes a framework grounded in differential geometry and convex optimization to (1) enforce known symmetries during training, (2) discover unknown symmetries of a given model or data set, and (3) promote symmetry by penalizing symmetry breaking within a user-specified group of candidate symmetries, so that a model breaks them only when there is sufficient evidence in the data. The central contribution is to cast symmetry enforcement and discovery as dual linear-algebraic tasks built on the Lie derivative associated with fiber-linear Lie group actions on vector bundles, and to construct convex regularization terms via nuclear-norm relaxation. The framework applies to basis-function regression, dynamical systems discovery, neural networks, and neural operators acting on fields, and the authors report improved generalization and parameter efficiency, particularly in low-data regimes, while retaining geometric structure and interpretability.
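To make the dual linear-algebraic view concrete, the following is a minimal sketch (not the authors' code) of the enforcement task for the simplest possible model, a linear map y = W x. Assuming the candidate group acts linearly on inputs and outputs through infinitesimal generators A_k and B_k, the vanishing-Lie-derivative condition B_k W - W A_k = 0 is linear in W, so enforcing symmetry amounts to projecting a fitted W onto a null space. The function names and the finite-dimensional linear setting are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): enforcing a known linear symmetry
# on a linear model y = W x. Equivariance under the connected group generated by
# (A_k, B_k), i.e. W exp(t A_k) = exp(t B_k) W for all t, is equivalent to the
# linear Lie-derivative conditions B_k W - W A_k = 0.
import numpy as np

def equivariance_constraints(gens_in, gens_out):
    """Stack the constraints (I ⊗ B_k - A_k^T ⊗ I) vec(W) = 0 (column-major vec)."""
    blocks = []
    for A, B in zip(gens_in, gens_out):
        n, m = A.shape[0], B.shape[0]
        blocks.append(np.kron(np.eye(n), B) - np.kron(A.T, np.eye(m)))
    return np.vstack(blocks)

def project_onto_equivariant(W, gens_in, gens_out, tol=1e-10):
    """Orthogonally project vec(W) onto the null space of the constraints."""
    m, n = W.shape
    M = equivariance_constraints(gens_in, gens_out)
    _, s, Vt = np.linalg.svd(M)
    null_basis = Vt[np.sum(s > tol):]            # rows span the equivariant subspace
    w = W.flatten(order="F")
    return (null_basis.T @ (null_basis @ w)).reshape((m, n), order="F")

# Example: enforce SO(2) (rotation) equivariance for a map from R^2 to R^2.
J = np.array([[0.0, -1.0], [1.0, 0.0]])          # rotation generator on inputs and outputs
W_fit = np.array([[1.0, 0.3], [-0.2, 0.9]])      # e.g. an unconstrained least-squares fit
W_eq = project_onto_equivariant(W_fit, [J], [J])
print(W_eq)                                       # of the form a*I + b*J, so it commutes with rotations
```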
📝 Abstract
Symmetry is present throughout nature and continues to play an increasingly central role in physics and machine learning. Fundamental symmetries, such as Poincaré invariance, allow physical laws discovered in laboratories on Earth to be extrapolated to the farthest reaches of the universe. Symmetry is essential to achieving this extrapolatory power in machine learning applications. For example, translation invariance in image classification allows models with fewer parameters, such as convolutional neural networks, to be trained on smaller data sets and achieve state-of-the-art performance. In this paper, we provide a unifying theoretical and methodological framework for incorporating symmetry into machine learning models in three ways: 1. enforcing known symmetry when training a model; 2. discovering unknown symmetries of a given model or data set; and 3. promoting symmetry during training by learning a model that breaks symmetries within a user-specified group of candidates when there is sufficient evidence in the data. We show that these tasks can be cast within a common mathematical framework whose central object is the Lie derivative associated with fiber-linear Lie group actions on vector bundles. We extend and unify several existing results by showing that enforcing and discovering symmetry are linear-algebraic tasks that are dual with respect to the bilinear structure of the Lie derivative. We also propose a novel way to promote symmetry by introducing a class of convex regularization functions based on the Lie derivative and nuclear norm relaxation to penalize symmetry breaking during training of machine learning models. We explain how these ideas can be applied to a wide range of machine learning models including basis function regression, dynamical systems discovery, neural networks, and neural operators acting on fields.
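One plausible finite-dimensional reading of the "convex regularization functions based on the Lie derivative and nuclear norm relaxation" is sketched below, again for a linear model: stack the vectorized Lie derivatives of the model with respect to each candidate generator into a matrix and penalize its nuclear norm. Because the nuclear norm promotes low rank, minimizers tend to leave an entire subspace of candidate generators with exactly vanishing Lie derivative, i.e. an exact symmetry subalgebra, while the remaining candidates are broken only as far as the data demands. This is an assumption-laden simplification, not the paper's implementation.

```python
# Hypothetical sketch: a convex, nuclear-norm penalty on symmetry breaking for a
# linear model y = W x and candidate generator pairs (A_k, B_k).
import numpy as np

def symmetry_breaking_penalty(W, candidates):
    """Nuclear norm of [vec(B_1 W - W A_1) | ... | vec(B_K W - W A_K)].
    Zero iff every candidate generator is an exact symmetry of W; convex in W."""
    cols = [(B @ W - W @ A).flatten(order="F") for A, B in candidates]
    return np.linalg.svd(np.column_stack(cols), compute_uv=False).sum()

# Candidate algebra: rotations (J) and scalings (I), acting identically on the
# inputs and outputs of a map from R^2 to R^2.
J = np.array([[0.0, -1.0], [1.0, 0.0]])
I2 = np.eye(2)
candidates = [(J, J), (I2, I2)]

W_equivariant = 0.95 * I2 - 0.25 * J              # commutes with both generators
W_generic = np.array([[1.0, 0.3], [-0.2, 0.9]])   # breaks the rotation candidate

print(symmetry_breaking_penalty(W_equivariant, candidates))  # ~0.0
print(symmetry_breaking_penalty(W_generic, candidates))      # positive (rotation is broken)

# During training one would minimize  fit_loss(W) + lam * symmetry_breaking_penalty(W, candidates)
# with any subgradient or proximal method; symmetry is promoted rather than hard-coded.
```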
Problem

Research questions and friction points this paper is trying to address.

Enforcing known symmetry in machine learning models
Discovering unknown symmetries in models or datasets
Promoting symmetry softly during training, so that candidate symmetries are broken only when the data provides sufficient evidence
Innovation

Methods, ideas, or system contributions that make the work stand out.

A common mathematical framework whose central object is the Lie derivative associated with fiber-linear Lie group actions on vector bundles
Enforcing and discovering symmetry cast as linear-algebraic tasks that are dual with respect to the bilinear structure of the Lie derivative (see the discovery sketch after this list)
A class of convex, nuclear-norm-based regularization functions that promote symmetry by penalizing symmetry breaking within a user-specified candidate group
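As a companion to the enforcement sketch above, here is a hedged sketch of the dual task, discovering symmetries of a fixed model: the same Lie-derivative condition is now a linear system in the generator rather than in the model, so the symmetry algebra is a null space. The restriction to a square linear model with the same group action on inputs and outputs is an assumption made for brevity.

```python
# Hypothetical sketch: discovering the connected linear symmetries of a fixed
# model y = W x, i.e. all generators A with A W - W A = 0, so that
# W exp(t A) = exp(t A) W for every t.
import numpy as np

def symmetry_generators(W, tol=1e-10):
    """Basis of {A : A W = W A}, found as a null space (column-major vec):
    vec(A W) = (W^T ⊗ I) vec(A) and vec(W A) = (I ⊗ W) vec(A)."""
    n = W.shape[0]
    M = np.kron(W.T, np.eye(n)) - np.kron(np.eye(n), W)
    _, s, Vt = np.linalg.svd(M)
    null_basis = Vt[np.sum(s > tol):]
    return [row.reshape((n, n), order="F") for row in null_basis]

# The rotation-equivariant model from the enforcement sketch: W = 0.95*I - 0.25*J.
W = np.array([[0.95, 0.25], [-0.25, 0.95]])
for A in symmetry_generators(W):
    print(np.round(A, 3), end="\n\n")   # a 2-dimensional algebra: the span of I and J
```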
Authors
Samuel E. Otto
AI Institute in Dynamic Systems, University of Washington, Seattle, WA 98195-4322, USA
Nicholas Zolman
AI Institute in Dynamic Systems, University of Washington, Seattle, WA 98195-4322, USA
J. Nathan Kutz
AI Institute in Dynamic Systems, University of Washington, Seattle, WA 98195-4322, USA
Steven L. Brunton
AI Institute in Dynamic Systems, University of Washington, Seattle, WA 98195-4322, USA