🤖 AI Summary
This work addresses the low inference efficiency of equivariant neural networks, which stems from computationally expensive tensor operations. We propose an efficient equivariant modeling framework based on local canonicalization, which factors out rotational symmetry and enables a lightweight implementation of tensor field networks without compromising equivariance. The framework unifies diverse efficient equivariant architectures and facilitates seamless integration of equivariant representations into arbitrary message-passing networks. Theoretical analysis demonstrates a significant reduction in computational complexity. Empirical evaluation shows up to several-fold speedups in inference time on molecular and geometric learning tasks, while maintaining or even surpassing baseline models in predictive accuracy. To promote practical adoption, we release tensor_frames, a PyTorch Geometric-based open-source toolkit implementing the proposed framework.
📝 Abstract
Equivariant neural networks offer strong inductive biases for learning from molecular and geometric data but often rely on specialized, computationally expensive tensor operations. We present a framework that transfers existing tensor field networks into the more efficient local canonicalization paradigm, preserving equivariance while significantly improving runtime. Within this framework, we systematically compare different equivariant representations in terms of theoretical complexity, empirical runtime, and predictive accuracy. We publish the tensor_frames package, a PyTorch Geometric-based implementation of local canonicalization that enables straightforward integration of equivariance into any standard message passing neural network.
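To illustrate the local canonicalization idea in its simplest form, here is a minimal NumPy sketch (not the tensor_frames API; all function names here are hypothetical). A per-node rotation frame is built from neighbor geometry via Gram-Schmidt, vector features are expressed in that frame (making them rotation-invariant), an arbitrary non-equivariant function is applied, and the result is rotated back, so the overall map is rotation-equivariant by construction:

```python
import numpy as np

def local_frame(rel_pos):
    """Build a right-handed orthonormal frame from neighbor offsets
    via Gram-Schmidt. rel_pos: (k, 3) array of relative positions
    of a node's neighbors (k >= 2, non-collinear assumed)."""
    a = rel_pos[0] / np.linalg.norm(rel_pos[0])
    b = rel_pos[1] - np.dot(rel_pos[1], a) * a
    b = b / np.linalg.norm(b)
    c = np.cross(a, b)
    return np.stack([a, b, c])  # (3, 3) rotation matrix, rows = frame axes

def canonicalized_message(rel_pos, vec_feat):
    """Express a vector feature in the local frame (invariant coordinates),
    apply a stand-in for any learned, non-equivariant message function,
    then rotate the result back into the global frame."""
    R = local_frame(rel_pos)
    local = R @ vec_feat       # invariant under global rotations
    out_local = 2.0 * local    # placeholder for an arbitrary learned map
    return R.T @ out_local     # rotating back restores equivariance
```

Because the frame co-rotates with the input geometry, rotating all positions and features by a matrix Q rotates the output by the same Q, even though the inner function is not equivariant itself.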