Equivariant Evidential Deep Learning for Interatomic Potentials

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes the e²IP framework to address the limitations of existing uncertainty quantification methods for machine learning interatomic potentials, which are either computationally expensive or underperforming, and which struggle to maintain statistical consistency under rotations for vector-valued outputs such as atomic forces. e²IP extends evidential deep learning to vector outputs for the first time by constructing a rotationally equivariant 3×3 symmetric positive-definite covariance tensor, enabling joint modeling of atomic forces and their uncertainties in a single forward pass. The approach is compatible with arbitrary backbone networks and achieves efficient, physically consistent uncertainty quantification within a single model. On multiple molecular benchmarks, e²IP outperforms non-equivariant evidential baselines and ensemble methods in data efficiency, inference speed, and reliability of uncertainty estimates.

📝 Abstract
Uncertainty quantification (UQ) is critical for assessing the reliability of machine learning interatomic potentials (MLIPs) in molecular dynamics (MD) simulations, identifying extrapolation regimes and enabling uncertainty-aware workflows such as active learning for training dataset construction. Existing UQ approaches for MLIPs are often limited by high computational cost or suboptimal performance. Evidential deep learning (EDL) provides a theoretically grounded single-model alternative that determines both aleatoric and epistemic uncertainty in a single forward pass. However, extending evidential formulations from scalar targets to vector-valued quantities such as atomic forces introduces substantial challenges, particularly in maintaining statistical self-consistency under rotational transformations. To address this, we propose Equivariant Evidential Deep Learning for Interatomic Potentials (e²IP), a backbone-agnostic framework that models atomic forces and their uncertainty jointly by representing uncertainty as a full 3×3 symmetric positive-definite covariance tensor that transforms equivariantly under rotations. Experiments on diverse molecular benchmarks show that e²IP provides a stronger accuracy-efficiency-reliability balance than the non-equivariant evidential baseline and the widely used ensemble method. It also achieves better data efficiency through the fully equivariant architecture while retaining single-model inference efficiency.
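The abstract's central requirement can be illustrated numerically. The sketch below is not the paper's code; it only demonstrates, under stated assumptions, the two properties e²IP's force covariance must satisfy: the 3×3 tensor is symmetric positive-definite (here guaranteed via a Cholesky-style parameterization, a common construction that the paper may or may not use), and a covariance of a vector quantity transforms under a rotation R as Σ → R Σ Rᵀ.

```python
import numpy as np

def spd_from_params(theta):
    """Build a 3x3 symmetric positive-definite matrix from 6 unconstrained
    parameters via a lower-triangular (Cholesky-style) factor L, with the
    diagonal exponentiated so L L^T is SPD by construction."""
    L = np.zeros((3, 3))
    L[np.tril_indices(3)] = theta
    L[np.diag_indices(3)] = np.exp(np.diag(L))
    return L @ L.T

# A concrete rotation about the z-axis (any R in SO(3) works the same way).
angle = 0.7
R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
              [np.sin(angle),  np.cos(angle), 0.0],
              [0.0,            0.0,           1.0]])

rng = np.random.default_rng(0)
Sigma = spd_from_params(rng.standard_normal(6))

# Property 1: SPD -- all eigenvalues strictly positive.
assert np.all(np.linalg.eigvalsh(Sigma) > 0)

# Property 2: equivariant transformation rule for a covariance tensor.
Sigma_rot = R @ Sigma @ R.T

# SPD-ness and the isotropic part (trace) are preserved under rotation.
assert np.all(np.linalg.eigvalsh(Sigma_rot) > 0)
assert np.isclose(np.trace(Sigma), np.trace(Sigma_rot))
```

In an equivariant architecture, the network predicting Σ is built so that rotating the input atomic coordinates produces exactly this R Σ Rᵀ transformation of the output, which is what keeps the uncertainty estimates statistically self-consistent under rotations.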
Problem

Research questions and friction points this paper is trying to address.

uncertainty quantification
interatomic potentials
equivariance
atomic forces
evidential deep learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Equivariant Deep Learning
Evidential Deep Learning
Uncertainty Quantification
Interatomic Potentials
Covariance Tensor
Zhongyao Wang
College of Computer Science and Artificial Intelligence, Fudan University, Shanghai, China; Shanghai Artificial Intelligence Laboratory, Shanghai, China; Shanghai Innovation Institution, Shanghai, China
Taoyong Cui
Shanghai Artificial Intelligence Laboratory, Shanghai, China; The Chinese University of Hong Kong, Hong Kong SAR, China
Jiawen Zou
College of Computer Science and Artificial Intelligence, Fudan University, Shanghai, China
Shufei Zhang
Shanghai Artificial Intelligence Laboratory, Shanghai, China
Bo Yan
College of Computer Science and Artificial Intelligence, Fudan University, Shanghai, China
Wanli Ouyang
Shanghai Artificial Intelligence Laboratory, Shanghai, China; The Chinese University of Hong Kong, Hong Kong SAR, China
Weimin Tan
Fudan University
computer vision, deep learning, saliency detection, small object detection and recognition
Mao Su
Shanghai AI Laboratory
Physics, AI