On Universality of Deep Equivariant Networks

📅 2025-10-17
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing universality theories for equivariant neural networks either suffer from the curse of dimensionality, owing to their reliance on regular representations, or are restricted to invariant settings and specific architectures. Method: We depart from conventional separability frameworks and introduce the novel notion of *entry-wise separability*, integrating group representation theory, function approximation theory, and architectural analysis while avoiding impractically high-dimensional hidden spaces. Contribution/Results: We establish the first universality theorem for deep equivariant networks: under mild separation constraints, arbitrary-precision approximation is achievable over entry-wise separable functions provided the network is sufficiently deep or equipped with a fully connected readout layer. Our result unifies and generalizes prior work, systematically identifying depth and the readout layer, rather than width alone, as the decisive mechanisms for equivariant universality, thereby filling a fundamental theoretical gap in the field.
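For orientation, the equivariance condition at stake, written in standard notation. This is a background sketch, not the paper's exact formalism; the symbols $\rho_{\mathrm{in}}$, $\rho_{\mathrm{out}}$, and $\mathcal{X}$ are assumed names:

```latex
% G-equivariance: the network f commutes with the group action, where
% \rho_in and \rho_out are representations of a group G on the input
% space \mathcal{X} and the output space, respectively.
f\bigl(\rho_{\mathrm{in}}(g)\,x\bigr) = \rho_{\mathrm{out}}(g)\,f(x)
    \quad \text{for all } g \in G,\; x \in \mathcal{X}.
% Invariance is the special case \rho_{\mathrm{out}}(g) = \mathrm{id}.
```

Universality then asks which continuous equivariant targets such networks can approximate to arbitrary precision; the paper's answer is phrased relative to entry-wise separable targets.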

๐Ÿ“ Abstract
Universality results for equivariant neural networks remain rare. Those that do exist typically hold only in restrictive settings: either they rely on regular or higher-order tensor representations, leading to impractically high-dimensional hidden spaces, or they target specialized architectures, often confined to the invariant setting. This work develops a more general account. For invariant networks, we establish a universality theorem under separation constraints, showing that the addition of a fully connected readout layer secures approximation within the class of separation-constrained continuous functions. For equivariant networks, where results are even scarcer, we demonstrate that standard separability notions are inadequate and introduce the sharper criterion of $\textit{entry-wise separability}$. We show that with sufficient depth or with the addition of appropriate readout layers, equivariant networks attain universality within the entry-wise separable regime. Together with prior results showing the failure of universality for shallow models, our findings identify depth and readout layers as decisive mechanisms for universality, additionally offering a unified perspective that subsumes and extends earlier specialized results.
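To make the architectural claim concrete, here is a minimal runnable sketch of a deep permutation-equivariant network with an optional fully connected readout, in the DeepSets style. This is an illustrative stand-in, not the paper's construction; all function names and shapes are hypothetical:

```python
import numpy as np

def equivariant_layer(x, w_self, w_mix, b):
    """Permutation-equivariant linear layer followed by ReLU.

    x: (n, d) array of n exchangeable entries. Each output row mixes
    its own features with the mean over all rows, so permuting the
    rows of x permutes the rows of the output.
    """
    pooled = x.mean(axis=0, keepdims=True)        # (1, d), permutation-invariant
    out = x @ w_self + pooled @ w_mix + b         # (n, d_out)
    return np.maximum(out, 0.0)

def deep_equivariant_net(x, layers, readout=None):
    """Stack of equivariant layers; optional invariant FC readout.

    `layers` is a list of (w_self, w_mix, b) triples. Without `readout`
    the map stays equivariant; with it, mean-pooling followed by a
    fully connected layer yields a permutation-invariant output.
    """
    for w_self, w_mix, b in layers:
        x = equivariant_layer(x, w_self, w_mix, b)
    if readout is None:
        return x                                   # equivariant output (n, d_last)
    w_out, b_out = readout
    return x.mean(axis=0) @ w_out + b_out          # invariant output

# Sanity check: permuting the input permutes (or preserves) the output.
rng = np.random.default_rng(0)
n, d, h = 5, 3, 8
layers = [(rng.normal(size=(d, h)), rng.normal(size=(d, h)), rng.normal(size=h)),
          (rng.normal(size=(h, h)), rng.normal(size=(h, h)), rng.normal(size=h))]
readout = (rng.normal(size=(h, 2)), rng.normal(size=2))
x = rng.normal(size=(n, d))
perm = rng.permutation(n)
assert np.allclose(deep_equivariant_net(x, layers)[perm],
                   deep_equivariant_net(x[perm], layers))           # equivariance
assert np.allclose(deep_equivariant_net(x, layers, readout),
                   deep_equivariant_net(x[perm], layers, readout))  # invariance
```

The checks at the end verify the two symmetry properties the abstract distinguishes: the deep stack alone is equivariant, while adding the readout yields an invariant map.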
Problem

Research questions and friction points this paper is trying to address.

Establishing universality for equivariant networks under separation constraints (see the sketch after this list)
Introducing entry-wise separability for equivariant network universality
Identifying depth and readout layers as key universality mechanisms
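For the first item, a sketch of the separation constraint in its common Stone–Weierstrass form, assuming the paper follows the standard convention (its exact constraint may be stated differently):

```latex
% A class \mathcal{H} of G-invariant functions separates orbits when
\forall\, x, y \in \mathcal{X}:\quad
    G \cdot x \neq G \cdot y \;\Longrightarrow\;
    \exists\, h \in \mathcal{H}:\ h(x) \neq h(y).
% Stone--Weierstrass then yields density: on compact \mathcal{X}, a
% subalgebra of invariant functions that separates orbits and contains
% the constants is dense in the continuous G-invariant functions.
```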
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sufficient depth alone secures equivariant universality
Entry-wise separability enables equivariant approximation
Readout layers secure invariant function approximation