Drawback of Enforcing Equivariance and its Compensation via the Lens of Expressive Power

📅 2025-12-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work reveals a fundamental limitation that equivariance constraints impose on the expressive power of two-layer ReLU neural networks: it constructs the first explicit counterexample showing that enforcing equivariance strictly reduces the set of functions such networks can represent. Through a geometric analysis of decision-boundary hyperplanes and channel-wise weight vectors, covering both equivariant and layer-wise equivariant models, the paper quantifies the magnitude of this loss of expressivity. Crucially, it proves that a modest architectural expansion, such as increasing the channel width, suffices to fully recover the lost expressivity. The key contribution is that the compensated equivariant network retains the original expressive capability while occupying a hypothesis space of provably lower complexity, yielding a tighter generalization bound. This result provides a theoretical foundation for equivariant neural network design and quantitative guidance on the capacity-generalization trade-off.
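To make "enforcing equivariance" concrete, here is a minimal sketch (illustrative only, not the paper's construction): tying each layer's channel-wise weight vectors into a circulant matrix makes a 2-layer ReLU network equivariant to cyclic shifts by construction, since element-wise ReLU commutes with permutations. All names (`circulant`, `f`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5  # input dimension; the symmetry group is the cyclic-shift group C_n
w1, w2 = rng.normal(size=n), rng.normal(size=n)  # one free weight vector per layer

def circulant(w):
    # Weight sharing: every row is a cyclic shift of the same vector w,
    # so the linear map commutes with cyclic shifts of its input.
    return np.stack([np.roll(w, g) for g in range(len(w))])

def f(x):
    # 2-layer ReLU network constrained to be C_n-equivariant
    # (element-wise ReLU commutes with any permutation of coordinates).
    return circulant(w2) @ np.maximum(circulant(w1) @ x, 0.0)

x = rng.normal(size=n)
assert np.allclose(f(np.roll(x, 2)), np.roll(f(x), 2))  # f(g·x) = g·f(x)
```

The weight tying cuts each layer's free parameters from n² to n; this kind of restriction is exactly what the paper analyzes as a potential strict loss of expressive power.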

📝 Abstract
Equivariant neural networks encode symmetry as an inductive bias and have achieved strong empirical performance across a wide range of domains. However, their expressive power is still not well understood. Focusing on 2-layer ReLU networks, this paper investigates the impact of equivariance constraints on the expressivity of equivariant and layer-wise equivariant networks. By examining the boundary hyperplanes and the channel vectors of ReLU networks, we construct an example showing that equivariance constraints can strictly limit expressive power. However, we demonstrate that this drawback can be compensated for by enlarging the model size. Furthermore, we show that despite the larger model size, the resulting architecture can still correspond to a hypothesis space of lower complexity, implying superior generalizability for equivariant networks.
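One width-compensation mechanism can be sketched numerically (this is a group-averaging illustration for an invariant target under cyclic shifts, assumed for concreteness, not the paper's actual construction): an unconstrained 2-layer ReLU net with k hidden channels is lifted to a shift-invariant net with k·|G| channels by replicating every hidden weight vector across all |G| cyclic shifts. The names `f`, `f_sym`, and `f_inv` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 3  # input dimension, hidden channels; group is cyclic shifts C_n
W = rng.normal(size=(k, n))
b = rng.normal(size=k)
a = rng.normal(size=k)

def relu(z):
    return np.maximum(z, 0.0)

def f(x):
    # Unconstrained 2-layer ReLU network with k hidden channels.
    return a @ relu(W @ x + b)

def f_sym(x):
    # Shift-invariant target: the average of f over the orbit of x under C_n.
    return np.mean([f(np.roll(x, g)) for g in range(n)])

# Lifted invariant net: replicate each hidden channel over all n shifted
# copies of its weight vector, so the width grows from k to k * n.
W_lift = np.concatenate([np.roll(W, g, axis=1) for g in range(n)], axis=0)
b_lift = np.tile(b, n)
a_lift = np.tile(a / n, n)

def f_inv(x):
    return a_lift @ relu(W_lift @ x + b_lift)

x = rng.normal(size=n)
assert np.allclose(f_inv(x), f_sym(x))              # lifted net realises the symmetrised target
assert np.allclose(f_inv(np.roll(x, 1)), f_inv(x))  # and is invariant by construction
```

The lifted net is n times wider, in line with the paper's message that a quantifiable enlargement of the model restores what the symmetry constraint removes, while the paper further shows the enlarged equivariant model can still have lower hypothesis-space complexity.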
Problem

Research questions and friction points this paper is trying to address.

Equivariant networks' expressive power limitations under symmetry constraints
Compensating reduced expressivity by increasing model size
Achieving lower complexity hypothesis space for better generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Equivariance constraints limit ReLU network expressivity
Enlarging model size compensates for expressivity loss
Larger equivariant networks maintain lower hypothesis complexity