🤖 AI Summary
This work proposes the e²IP framework to address the limitations of existing uncertainty quantification methods for machine learning interatomic potentials, which are either computationally expensive or insufficiently accurate, and which struggle to maintain statistical consistency under rotations for vector-valued outputs such as atomic forces. e²IP extends evidential deep learning to vector outputs for the first time by constructing a rotationally equivariant 3×3 symmetric positive-definite covariance tensor, enabling joint modeling of atomic forces and their uncertainties in a single forward pass. The approach is compatible with arbitrary backbone networks and achieves efficient, physically consistent uncertainty quantification within a single model. On multiple molecular benchmarks, e²IP outperforms non-equivariant evidential baselines and ensemble methods in data efficiency, inference speed, and reliability of uncertainty estimates.
📝 Abstract
Uncertainty quantification (UQ) is critical for assessing the reliability of machine learning interatomic potentials (MLIPs) in molecular dynamics (MD) simulations, identifying extrapolation regimes, and enabling uncertainty-aware workflows such as active learning for training-dataset construction. Existing UQ approaches for MLIPs are often limited by high computational cost or suboptimal performance. Evidential deep learning (EDL) provides a theoretically grounded single-model alternative that estimates both aleatoric and epistemic uncertainty in a single forward pass. However, extending evidential formulations from scalar targets to vector-valued quantities such as atomic forces introduces substantial challenges, particularly in maintaining statistical self-consistency under rotational transformations. To address this, we propose \textit{Equivariant Evidential Deep Learning for Interatomic Potentials} ($\text{e}^2$IP), a backbone-agnostic framework that models atomic forces and their uncertainty jointly by representing uncertainty as a full $3\times3$ symmetric positive-definite covariance tensor that transforms equivariantly under rotations. Experiments on diverse molecular benchmarks show that $\text{e}^2$IP achieves a stronger accuracy-efficiency-reliability balance than the non-equivariant evidential baseline and the widely used ensemble method. It also achieves better data efficiency through the fully equivariant architecture while retaining single-model inference efficiency.
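To make the abstract's key requirement concrete, the sketch below (not the paper's implementation; all function names are illustrative) shows the two properties the force-uncertainty representation must satisfy: the covariance is a $3\times3$ symmetric positive-definite (SPD) matrix, and under a rotation $R$ it transforms equivariantly as $\Sigma \mapsto R\Sigma R^\top$, which preserves SPD-ness.

```python
import numpy as np

def spd_from_params(theta):
    """Build an SPD 3x3 covariance from 6 unconstrained parameters
    via a Cholesky factor whose diagonal is made positive by exponentiation.
    (A common parameterization; the paper's exact construction may differ.)"""
    L = np.zeros((3, 3))
    L[np.tril_indices(3)] = theta
    L[np.diag_indices(3)] = np.exp(np.diag(L))  # enforce positive diagonal
    return L @ L.T  # symmetric and positive definite by construction

def rotate_cov(Sigma, R):
    """Equivariant transformation rule for a rank-2 tensor: Sigma -> R Sigma R^T."""
    return R @ Sigma @ R.T

rng = np.random.default_rng(0)
Sigma = spd_from_params(rng.normal(size=6))

# Random rotation in SO(3): orthogonalize a Gaussian matrix, fix the determinant.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q * np.sign(np.linalg.det(Q))

Sigma_rot = rotate_cov(Sigma, R)

# SPD is preserved under the equivariant transform: eigenvalues stay positive,
# and the spectrum (rotation-invariant content of the uncertainty) is unchanged.
assert np.all(np.linalg.eigvalsh(Sigma_rot) > 0)
assert np.allclose(np.sort(np.linalg.eigvalsh(Sigma_rot)),
                   np.sort(np.linalg.eigvalsh(Sigma)))
```

This is the self-consistency the abstract refers to: rotating the molecular input rotates the predicted force covariance in the same way, so the uncertainty ellipsoid stays attached to the physical geometry rather than to the coordinate frame.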