🤖 AI Summary
High computational cost impedes the widespread adoption of E(3)-equivariant foundation models in materials modeling. This work introduces Nequix—a lightweight E(3)-equivariant potential energy model—designed for high performance under extremely low compute budgets. Nequix achieves this through three key changes: an architectural simplification of NequIP, equivariant RMS layer normalization, and the Muon optimizer. With only 0.7 million parameters and just 500 A100-GPU hours of training (less than one-quarter of the training cost of most state-of-the-art methods), Nequix ranks third overall on the Matbench-Discovery and MDR Phonon leaderboards. Moreover, it delivers over 10× faster inference than current top-performing models. Its JAX implementation provides computational efficiency, full reproducibility, and hardware portability across modern accelerators.
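The summary mentions equivariant RMS layer normalization as one of Nequix's training ingredients. A minimal sketch of the idea, assuming a simple block layout in which features are grouped by irrep as arrays of shape `(channels, 2l+1)` — the function name and data layout are illustrative, not the actual Nequix implementation:

```python
import jax.numpy as jnp

def equivariant_rms_norm(x_by_irrep, eps=1e-6):
    """Illustrative equivariant RMS layer norm (not the Nequix code).

    x_by_irrep: list of arrays, one per irrep, each of shape (channels, 2l+1).
    Each block is divided by the RMS of its per-channel L2 norms. Because only
    rotation-invariant norms are used and each feature is rescaled uniformly,
    the operation commutes with rotations, preserving E(3) equivariance.
    """
    out = []
    for x in x_by_irrep:
        norms = jnp.linalg.norm(x, axis=-1)       # per-channel invariant norms
        rms = jnp.sqrt(jnp.mean(norms**2) + eps)  # one scalar per irrep block
        out.append(x / rms)
    return out

# Toy example: 4 scalar channels (l=0) and 3 vector channels (l=1).
scalars = jnp.ones((4, 1))
vectors = jnp.ones((3, 3))
normed = equivariant_rms_norm([scalars, vectors])
```

After normalization, the RMS of the per-channel norms in each block is approximately 1, while feature directions are untouched; a learnable per-irrep scale (omitted here) is typically added back.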
📝 Abstract
Foundation models for materials modeling are advancing quickly, but their training remains expensive, often placing state-of-the-art methods out of reach for many research groups. We introduce Nequix, a compact E(3)-equivariant potential that pairs a simplified NequIP design with modern training practices, including equivariant root-mean-square layer normalization and the Muon optimizer, to retain accuracy while substantially reducing compute requirements. Built in JAX, Nequix has 700K parameters and was trained in 500 A100-GPU hours. On the Matbench-Discovery and MDR Phonon benchmarks, Nequix ranks third overall while requiring less than one-quarter of the training cost of most other methods, and it delivers an order-of-magnitude faster inference speed than the current top-ranked model. We release the model weights and a fully reproducible codebase at https://github.com/atomicarchitects/nequix.