Implicit Bias and Invariance: How Hopfield Networks Efficiently Learn Graph Orbits

📅 2025-12-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates how Hopfield networks implicitly learn invariant representations of graph isomorphism classes from a small number of random graph samples. Methodologically, we first prove that any graph isomorphism class can be embedded into a three-dimensional invariant subspace under the action of the permutation group; we then introduce minimum energy flow (MEF) gradient descent, reveal its implicit bias toward norm-efficient solutions, and derive a polynomial upper bound on sample complexity from this bias. Theoretically, network parameters converge to this invariant subspace as the sample size increases; empirically, the mechanism enables efficient isomorphism-class inference and strong generalization. Our core contributions are: (i) establishing the first theoretical framework for implicit invariant learning in Hopfield networks; (ii) uncovering an intrinsic unification between implicit bias (specifically, norm-efficient optimization) and group invariance; and (iii) providing a novel principle for few-shot learning on graph-structured data.
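One concrete way to picture the three-dimensional invariant subspace (an assumption on our part, in the spirit of earlier Hopfield clique-storage constructions, not necessarily the paper's parametrization) is a network with one neuron per unordered vertex pair and zero self-couplings: a coupling matrix fixed by every node relabeling can only distinguish whether two pairs share a vertex or are disjoint, which together with a uniform threshold leaves three free parameters. The sketch below builds that basis and projects an arbitrary coupling matrix onto it; all function names are ours.

```python
import numpy as np
from itertools import combinations

def invariant_pair_basis(n):
    """Basis of permutation-invariant couplings for a Hopfield network with
    one neuron per unordered vertex pair {i, j} and zero self-couplings.

    B1 couples pairs sharing exactly one vertex; B2 couples disjoint pairs
    (requires n >= 4).  With a uniform threshold, span{B1, B2, uniform
    threshold} is a three-parameter permutation-invariant family.
    """
    pairs = list(combinations(range(n), 2))
    m = len(pairs)
    B1, B2 = np.zeros((m, m)), np.zeros((m, m))
    for a, p in enumerate(pairs):
        for b, q in enumerate(pairs):
            if a != b:
                (B1 if len(set(p) & set(q)) == 1 else B2)[a, b] = 1.0
    return pairs, B1, B2

def project_onto_invariant(W, B1, B2):
    """Orthogonal projection of a pair-indexed coupling matrix onto span{B1, B2}.
    B1 and B2 have disjoint supports, so the projection is entrywise averaging."""
    c1 = (W * B1).sum() / B1.sum()   # mean coupling between overlapping pairs
    c2 = (W * B2).sum() / B2.sum()   # mean coupling between disjoint pairs
    return c1 * B1 + c2 * B2

# Example: how far a random symmetric coupling matrix is from the subspace.
pairs, B1, B2 = invariant_pair_basis(6)
rng = np.random.default_rng(0)
W = rng.standard_normal((len(pairs), len(pairs)))
W = 0.5 * (W + W.T); np.fill_diagonal(W, 0.0)
W_inv = project_onto_invariant(W, B1, B2)
print("residual fraction:", np.linalg.norm(W - W_inv) / np.linalg.norm(W))
```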

📝 Abstract
Many learning problems involve symmetries, and while invariance can be built into neural architectures, it can also emerge implicitly when training on group-structured data. We study this phenomenon in classical Hopfield networks and show they can infer the full isomorphism class of a graph from a small random sample. Our results reveal that: (i) graph isomorphism classes can be represented within a three-dimensional invariant subspace, (ii) using gradient descent to minimize energy flow (MEF) has an implicit bias toward norm-efficient solutions, which underpins a polynomial sample complexity bound for learning isomorphism classes, and (iii) across multiple learning rules, parameters converge toward the invariant subspace as sample sizes grow. Together, these findings highlight a unifying mechanism for generalization in Hopfield networks: a bias toward norm efficiency in learning drives the emergence of approximate invariance under group-structured data.
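The MEF objective itself is defined in the paper; as a hedge, the sketch below assumes it has the same shape as minimum-probability-flow-style objectives, summing exp((E(x) - E(x')) / 2) over training patterns x and their single-bit-flip neighbours x', with the standard Hopfield energy E(x) = -1/2 x^T W x + theta^T x on binary states. Plain gradient descent on that objective is the kind of procedure whose implicit bias the abstract refers to; the exact objective and all function names here are our assumptions, not the paper's code.

```python
import numpy as np

def energy(W, theta, x):
    """Standard Hopfield energy E(x) = -1/2 x^T W x + theta^T x (binary {0,1} states)."""
    return -0.5 * x @ W @ x + theta @ x

def flow_and_grads(W, theta, X):
    """Assumed MEF-style objective and its gradients (W kept symmetric).

    Objective (same shape as minimum-probability-flow learning):
        K = sum_{x in X} sum_i exp(Delta_i(x) / 2),
        Delta_i(x) = E(x) - E(x with bit i flipped)
                   = d_i * ((W x)_i - theta_i) + W_ii / 2,  with d_i = 1 - 2 x_i.
    Driving K down pushes each training pattern toward a strict local
    minimum of the energy.
    """
    K, gW, gtheta = 0.0, np.zeros_like(W), np.zeros_like(theta)
    for x in X:
        d = 1.0 - 2.0 * x                      # flip direction for each {0,1} bit
        delta = d * (W @ x - theta) + 0.5 * np.diag(W)
        coef = 0.5 * np.exp(0.5 * delta)       # dK/dDelta_i for this pattern
        K += np.exp(0.5 * delta).sum()
        gtheta += -coef * d
        gW += np.outer(coef * d, x) + 0.5 * np.diag(coef)
    gW = 0.5 * (gW + gW.T)                     # symmetrized so W stays symmetric
    return K, gW, gtheta

def mef_gradient_descent(X, lr=0.01, steps=2000):
    """Plain gradient descent on the flow objective from a zero initialization."""
    n = X.shape[1]
    W, theta = np.zeros((n, n)), np.zeros(n)
    for _ in range(steps):
        _, gW, gtheta = flow_and_grads(W, theta, X)
        W -= lr * gW
        theta -= lr * gtheta
    return W, theta

# Example usage: two 4-bit patterns as stand-in training data.
X = np.array([[1, 0, 1, 0], [0, 1, 1, 0]], dtype=float)
W, theta = mef_gradient_descent(X)
```

Because each Delta_i is linear in (W, theta), this objective is convex, and from a zero start the parameters grow in a particular direction; the norm efficiency of that selected direction is the implicit bias the abstract describes.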
Problem

Research questions and friction points this paper is trying to address.

Studies implicit bias and invariance in Hopfield networks learning graph isomorphism classes.
Analyzes how gradient descent on the energy-flow (MEF) objective is implicitly biased toward norm-efficient solutions.
Explores convergence toward invariant subspaces with increasing sample sizes.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hopfield networks learn graph isomorphism classes through the implicit bias of training
Gradient descent on the energy-flow (MEF) objective is biased toward norm-efficient solutions
Parameters converge to the invariant subspace as sample size grows (see the toy sketch below)
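As a toy check of the convergence claim in the last bullet, the sketch below swaps in a classical Hebbian outer-product average as a stand-in learning rule (the paper studies MEF and other rules; this substitution, the example graph, and all names are our own illustration), feeds it m randomly relabeled copies of a small graph encoded as edge-indicator patterns, and reports how much of the resulting coupling matrix lies outside the permutation-invariant span as m grows.

```python
import numpy as np
from itertools import combinations

def off_invariant_fraction(adj, m, rng):
    """Average a Hebbian-style outer-product rule over m randomly relabeled
    copies of the graph `adj` (one neuron per vertex pair), then report the
    fraction of the coupling norm lying outside the permutation-invariant
    span (couplings between overlapping pairs vs. disjoint pairs)."""
    n = adj.shape[0]
    pairs = list(combinations(range(n), 2))
    k = len(pairs)
    B1, B2 = np.zeros((k, k)), np.zeros((k, k))
    for a, p in enumerate(pairs):
        for b, q in enumerate(pairs):
            if a != b:
                (B1 if len(set(p) & set(q)) == 1 else B2)[a, b] = 1.0
    W = np.zeros((k, k))
    for _ in range(m):
        perm = rng.permutation(n)
        relabeled = adj[np.ix_(perm, perm)]                # random isomorphic copy
        x = np.array([relabeled[i, j] for i, j in pairs])  # edge-indicator pattern
        W += np.outer(x, x) / m
    np.fill_diagonal(W, 0.0)
    W_inv = (W * B1).sum() / B1.sum() * B1 + (W * B2).sum() / B2.sum() * B2
    return np.linalg.norm(W - W_inv) / np.linalg.norm(W)

# 6-node path graph; the off-invariant residual should shrink as m grows.
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
rng = np.random.default_rng(0)
for m in (5, 50, 500):
    print(f"m = {m:3d}   off-invariant fraction = {off_invariant_fraction(A, m, rng):.3f}")
```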
Michael Murray
University of Washington
Robotics · Computer Vision · Natural Language Processing
Tenzin Chan
Department of Mathematics, UCLA
Kedar Karhadkar
Department of Mathematics, UCLA
Christopher J. Hillar
Algebraic New Theory AI