MatRIS: Toward Reliable and Efficient Pretrained Machine Learning Interatomic Potentials

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes MatRIS, an invariant machine learning interatomic potential (MLIP) that avoids the high computational cost of conventional equivariant models, which rely on expensive higher-order tensor operations and struggle to efficiently capture high-dimensional atomic interactions in large-scale systems. By introducing a separable attention mechanism with linear complexity O(N), MatRIS enables, for the first time within an invariant framework, accurate and efficient modeling of three-body interactions, sidestepping the longstanding trade-off between the efficiency of invariant models and the accuracy of equivariant ones. Experimental results show that MatRIS matches state-of-the-art equivariant models on benchmarks such as Matbench-Discovery (F1 = 0.847) while substantially reducing training cost, establishing the feasibility and competitiveness of high-performance invariant MLIPs.

📝 Abstract
Foundation MLIPs demonstrate broad applicability across diverse material systems and have emerged as a powerful and transformative paradigm in chemical and computational materials science. Equivariant MLIPs achieve state-of-the-art accuracy in a wide range of benchmarks by incorporating equivariant inductive bias. However, the reliance on tensor products and high-degree representations makes them computationally costly. This raises a fundamental question: as quantum mechanical-based datasets continue to expand, can we develop a more compact model to thoroughly exploit high-dimensional atomic interactions? In this work, we present MatRIS (Materials Representation and Interaction Simulation), an invariant MLIP that introduces attention-based modeling of three-body interactions. MatRIS leverages a novel separable attention mechanism with linear complexity O(N), enabling both scalability and expressiveness. MatRIS delivers accuracy comparable to that of leading equivariant models on a wide range of popular benchmarks (Matbench-Discovery, MatPES, MDR phonon, Molecular dataset, etc.). Taking Matbench-Discovery as an example, MatRIS achieves an F1 score of up to 0.847 and attains comparable accuracy at a lower training cost. The work indicates that our carefully designed invariant models can match or exceed the accuracy of equivariant models at a fraction of the cost, shedding light on the development of accurate and efficient MLIPs.
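The linear-complexity attention the abstract refers to can be illustrated with a minimal numpy sketch of generic separable (kernelized) attention. This is an assumption-laden illustration of the general technique, not MatRIS's actual architecture; all names here are hypothetical:

```python
import numpy as np

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    # Standard attention computes softmax(Q K^T) V, which costs O(N^2)
    # in the number of atoms N. A separable (kernelized) formulation
    # rewrites it as phi(Q) [phi(K)^T V]: associating the matrix product
    # this way makes the cost O(N * d^2) -- linear in N.
    Qp, Kp = phi(Q), phi(K)          # (N, d) non-negative feature maps
    KV = Kp.T @ V                    # (d, d) global summary, built once
    Z = Qp @ Kp.sum(axis=0)          # (N,) per-row normalization
    return (Qp @ KV) / Z[:, None]    # (N, d) attended features

# Toy example: 5 "atoms" with 4-dimensional invariant features.
rng = np.random.default_rng(0)
N, d = 5, 4
Q, K, V = rng.normal(size=(3, N, d))
out = linear_attention(Q, K, V)
print(out.shape)  # (5, 4)
```

Because the (d, d) summary `KV` is independent of the query index, each atom's update reads a fixed-size buffer rather than attending over all N neighbors, which is what makes the O(N) scaling claim plausible for large systems.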
Problem

Research questions and friction points this paper is trying to address.

Machine Learning Interatomic Potentials
Computational Efficiency
High-dimensional Atomic Interactions
Invariant Models
Scalability
Innovation

Methods, ideas, or system contributions that make the work stand out.

invariant MLIP
separable attention
three-body interactions
linear complexity
efficient materials modeling
Yuanchang Zhou
State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Siyu Hu
Institute of Computing Technology, Chinese Academy of Sciences
Xiangyu Zhang
State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Hongyu Wang
Institute of Computing Technology, Chinese Academy of Sciences
Guangming Tan
State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Weile Jia
State Key Lab of Processors, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences; School of Advanced Interdisciplinary Sciences, UCAS