On Universality Classes of Equivariant Networks

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the universality, i.e., the uniform approximation capability, of equivariant neural networks, noting that prior work predominantly characterizes separation power, while a systematic analysis of function approximation under symmetry constraints has been lacking. Method: Leveraging group representation theory, invariant theory, and Weisfeiler–Leman analysis, the authors characterize the universality classes of shallow invariant networks and derive group-structural criteria for universality. Contribution/Results: They prove that separation power does not imply universality: models with identical separation power may differ markedly in approximation ability. The work gives sufficient conditions under which shallow equivariant networks fail to be universal, and identifies settings where separation-constrained universality is achieved. Crucially, it reveals fundamental approximation limitations of standard shallow equivariant architectures under key symmetries, including permutation groups, exposing intrinsic expressivity bottlenecks in widely used designs.

📝 Abstract
Equivariant neural networks provide a principled framework for incorporating symmetry into learning architectures and have been extensively analyzed through the lens of their separation power, that is, the ability to distinguish inputs modulo symmetry. This notion plays a central role in settings such as graph learning, where it is often formalized via the Weisfeiler-Leman hierarchy. In contrast, the universality of equivariant models, that is, their capacity to approximate target functions, remains comparatively underexplored. In this work, we investigate the approximation power of equivariant neural networks beyond separation constraints. We show that separation power does not fully capture expressivity: models with identical separation power may differ in their approximation ability. To demonstrate this, we characterize the universality classes of shallow invariant networks, providing a general framework for understanding which functions these architectures can approximate. Since equivariant models reduce to invariant ones under projection, this analysis yields sufficient conditions under which shallow equivariant networks fail to be universal. Conversely, we identify settings where shallow models do achieve separation-constrained universality. These positive results, however, depend critically on structural properties of the symmetry group, such as the existence of adequate normal subgroups, which may not hold in important cases like permutation symmetry.
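To make the objects under study concrete, the following is a minimal sketch of a shallow permutation-invariant network of the kind the abstract discusses: a sum-pooled one-hidden-layer model (a DeepSets-style construction; the weight shapes and function names here are illustrative assumptions, not the paper's exact architecture). Sum pooling over set elements makes the output invariant to any permutation of the inputs, which is the symmetry constraint in question; whether such shallow models can approximate *every* invariant function is exactly what the paper analyzes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative shallow invariant model (assumed sizes, not from the paper):
# f(X) = w2 . tanh(W1 @ (sum_i x_i) + b), for a set X = {x_1, ..., x_n}.
W1 = rng.normal(size=(8, 3))   # hidden layer acting on the pooled features
b = rng.normal(size=8)
w2 = rng.normal(size=8)

def shallow_invariant(X):
    """X: (n, 3) array of n set elements; returns a scalar."""
    pooled = X.sum(axis=0)               # permutation-invariant sum pooling
    hidden = np.tanh(W1 @ pooled + b)    # single (shallow) nonlinearity
    return float(w2 @ hidden)

# Permuting the set elements leaves the output unchanged.
X = rng.normal(size=(5, 3))
perm = rng.permutation(5)
print(np.isclose(shallow_invariant(X), shallow_invariant(X[perm])))
```

The invariance check passes by construction, since the sum over elements discards their order before any weights are applied.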
Problem

Research questions and friction points this paper is trying to address.

Investigates approximation power of equivariant neural networks beyond separation constraints
Characterizes universality classes of shallow invariant networks for function approximation
Identifies conditions where shallow equivariant networks fail or achieve universality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzes both the separation power and the universality of equivariant networks
Characterizes the universality classes of shallow invariant networks
Shows that structural group properties (e.g., suitable normal subgroups) are critical for universality