Facet: highly efficient E(3)-equivariant networks for interatomic potentials

📅 2025-09-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Computational materials discovery is hindered by the high cost of ab initio calculations, while existing E(3)-equivariant graph neural networks, though geometrically symmetric, suffer from prohibitive computational and memory overhead due to high-order tensor operations across multiple harmonic orders. This work proposes Facet, an efficient E(3)-equivariant neural network: it replaces the MLPs used for interatomic distance modeling with spline-based functions, introduces a general-purpose equivariant layer based on spherical grid projection, and combines spherical harmonic encoding with lightweight MLPs. On the MPTrj dataset, the model matches state-of-the-art accuracy with less than 10% of the training compute of leading models and far fewer parameters. It accelerates crystal relaxation by 2× over MACE and speeds up large-scale foundation-model training by more than 10×. The core contribution is simultaneously ensuring strict E(3) equivariance, high representational capacity, and substantial computational efficiency.
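The spline idea in the summary can be illustrated with a toy numpy sketch. This is not Facet's actual basis (the class name `SplineRadial`, the knot layout, and the use of piecewise-linear rather than cubic interpolation are all assumptions for illustration); it only shows why a tabulated radial function with learned knot values is cheaper than an MLP forward pass per interatomic distance.

```python
import numpy as np

# Hypothetical spline-based radial function: learned values at fixed knots,
# evaluated by piecewise-linear interpolation instead of an MLP forward pass.
# (Illustrative only; the paper's exact spline form is not reproduced here.)
class SplineRadial:
    def __init__(self, r_cut=5.0, n_knots=16, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        self.knots = np.linspace(0.0, r_cut, n_knots)  # fixed distance grid
        self.values = rng.normal(size=n_knots)         # trainable parameters
        self.values[-1] = 0.0                          # decay to zero at the cutoff

    def __call__(self, r):
        # np.interp clamps outside [0, r_cut]; with values[-1] = 0 the
        # function vanishes beyond the cutoff, as a radial envelope should.
        return np.interp(r, self.knots, self.values)
```

Evaluating the spline is a table lookup plus one linear blend per distance, versus several matrix multiplies for an MLP, which is where the memory and compute savings come from.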

📝 Abstract
Computational materials discovery is limited by the high cost of first-principles calculations. Machine learning (ML) potentials that predict energies from crystal structures are promising, but existing methods face computational bottlenecks. Steerable graph neural networks (GNNs) encode geometry with spherical harmonics, respecting atomic symmetries -- permutation, rotation, and translation -- for physically realistic predictions. Yet maintaining equivariance is difficult: activation functions must be modified, and each layer must handle multiple data types for different harmonic orders. We present Facet, a GNN architecture for efficient ML potentials, developed through systematic analysis of steerable GNNs. Our innovations include replacing expensive multi-layer perceptrons (MLPs) for interatomic distances with splines, which match performance while cutting computational and memory demands. We also introduce a general-purpose equivariant layer that mixes node information via spherical grid projection followed by standard MLPs -- faster than tensor products and more expressive than linear or gate layers. On the MPTrj dataset, Facet matches leading models with far fewer parameters and under 10% of their training compute. On a crystal relaxation task, it runs twice as fast as MACE models. We further show SevenNet-0's parameters can be reduced by over 25% with no accuracy loss. These techniques enable more than 10x faster training of large-scale foundation models for ML potentials, potentially reshaping computational materials discovery.
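The "spherical grid projection followed by standard MLPs" idea from the abstract can be sketched in miniature. The toy below is not Facet's implementation (which would use denser S² grids and MLPs mixing many channels); it keeps only l=0 (scalar) and l=1 (vector) features and a six-point octahedral grid, for which the sample-and-project round trip is exact, and applies an ordinary pointwise nonlinearity on the grid.

```python
import numpy as np

# Octahedral grid of directions: +-x, +-y, +-z. Over this grid, mean(n) = 0 and
# mean(n n^T) = I/3 hold exactly, so projecting l<=1 features to the grid and
# back is lossless. (Toy setup; real models use much denser spherical grids.)
GRID = np.array([[ 1, 0, 0], [-1, 0, 0],
                 [ 0, 1, 0], [ 0, -1, 0],
                 [ 0, 0, 1], [ 0, 0, -1]], dtype=float)

def to_grid(s, v):
    """Sample the signal f(n) = s + 3 * (n . v) at every grid direction."""
    return s + 3.0 * GRID @ v

def from_grid(f):
    """Project grid samples back to a scalar (l=0) and a vector (l=1)."""
    s = f.mean()
    v = (f[:, None] * GRID).mean(axis=0)
    return s, v

def grid_layer(s, v, g=np.tanh):
    """Grid nonlinearity: project to the grid, apply g pointwise, project back."""
    return from_grid(g(to_grid(s, v)))
```

For any rotation that permutes the grid points (e.g. a 90° rotation about z), this layer commutes exactly with the rotation even though `g` is an arbitrary pointwise function; with a dense quadrature grid the same construction is equivariant to high accuracy for all rotations, which is what lets standard MLPs replace expensive tensor products.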
Problem

Research questions and friction points this paper is trying to address.

Reducing computational cost of first-principles calculations
Overcoming bottlenecks in machine learning interatomic potentials
Maintaining E(3)-equivariance efficiently in graph neural networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Spline-based distance modeling for computational efficiency
Equivariant layer with spherical grid projection
Reduced parameters and faster training for potentials