Universally Invariant Learning in Equivariant GNNs

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of designing equivariant graph neural networks (GNNs) that are simultaneously complete (universal approximators over the space of equivariant functions) and computationally efficient, with polynomial-time guarantees. The authors propose a construction framework grounded in geometric graphs: by introducing a canonical form (a complete scalar function of the graph) together with a full-rank steerable basis set, they reduce equivariant function modeling to scalar function modeling, so that any equivariant function can be approximated as a linear combination of the basis vectors with coefficients produced by a scalar network. Unlike prior routes to completeness that rely on deeper architectures, higher body orders, or higher-degree steerable tensors, this construction achieves polynomial-time algorithmic complexity. Instantiated on two common models, EGNN and TFN, the approach reaches state-of-the-art performance on molecular dynamics and equivariant regression tasks using only 2–3 layers, with up to a 5.3× speedup and strong expressivity.
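The core idea above, producing equivariant outputs as invariant-scalar-weighted combinations of steerable basis vectors, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name and the toy exponential "scalar network" are purely illustrative assumptions.

```python
import numpy as np

def equivariant_readout(pos, center_idx=0):
    """Toy sketch: predict an equivariant vector for one node by weighting
    steerable basis vectors (relative positions, which rotate with the input)
    by invariant scalars (functions of pairwise distances). Illustrative only."""
    rel = pos - pos[center_idx]          # steerable basis: rotates with the input
    dists = np.linalg.norm(rel, axis=1)  # rotation-invariant scalars
    weights = np.exp(-dists)             # stand-in for a learned scalar network
    return (weights[:, None] * rel).sum(axis=0)

rng = np.random.default_rng(0)
pos = rng.normal(size=(5, 3))

# Random rotation matrix via QR decomposition of a Gaussian matrix
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

out = equivariant_readout(pos)
out_rot = equivariant_readout(pos @ Q.T)
assert np.allclose(out_rot, out @ Q.T, atol=1e-10)  # equivariance holds
```

Because the weights depend only on distances, rotating the input rotates the output by exactly the same matrix, which is the equivariance property the paper's scalar-times-basis decomposition exploits.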

📝 Abstract
Equivariant Graph Neural Networks (GNNs) have demonstrated significant success across various applications. To achieve completeness -- that is, the universal approximation property over the space of equivariant functions -- the network must effectively capture the intricate multi-body interactions among different nodes. Prior methods attain this via deeper architectures, augmented body orders, or increased degrees of steerable features, often at high computational cost and without polynomial-time solutions. In this work, we present a theoretically grounded framework for constructing complete equivariant GNNs that is both efficient and practical. We prove that a complete equivariant GNN can be achieved through two key components: 1) a complete scalar function, referred to as the canonical form of the geometric graph; and 2) a full-rank steerable basis set. Leveraging this finding, we propose an efficient algorithm for constructing complete equivariant GNNs based on two common models: EGNN and TFN. Empirical results show that our model achieves superior completeness and excellent performance with only a few layers, thereby significantly reducing computational overhead while maintaining strong practical efficacy.
Problem

Research questions and friction points this paper is trying to address.

Achieving universal approximation in equivariant GNNs efficiently
Reducing computational costs while maintaining completeness in GNNs
Practically constructing complete equivariant networks with only a few layers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Complete scalar function as canonical geometric graph form
Full-rank steerable basis set for equivariant representation
Efficient algorithm enabling shallow complete equivariant GNNs
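The "full-rank steerable basis set" innovation can be sketched concretely: for a generic node neighborhood, relative position vectors plus a cross product already span all of R^3, so any equivariant vector output is expressible in that basis. The construction below is an illustrative assumption, not the paper's actual basis.

```python
import numpy as np

def steerable_basis(pos, center_idx=0):
    """Illustrative full-rank steerable basis for one node: relative position
    vectors plus one cross product, which supplies the out-of-plane direction.
    For generic (non-degenerate) geometries these span R^3."""
    rel = pos - pos[center_idx]
    rel = rel[np.any(rel != 0, axis=1)]  # drop the zero self-vector
    cross = np.cross(rel[0], rel[1])     # out-of-plane steerable vector
    return np.vstack([rel, cross])

rng = np.random.default_rng(1)
pos = rng.normal(size=(4, 3))
B = steerable_basis(pos)
assert np.linalg.matrix_rank(B) == 3  # full rank: the basis spans R^3
```

Full rank is what lets the method trade high-order tensor operations for scalar coefficients: once the basis spans the output space, only the invariant weights need to be learned.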