Improving the Stability of GNN Force Field Models by Reducing Feature Correlation

📅 2025-02-18
🤖 AI Summary
Graph neural network force fields (GNNFFs) suffer from instability during out-of-distribution (OOD) molecular dynamics (MD) simulations, limiting their long-term predictive reliability. Method: This work establishes a strong negative relationship between edge-feature correlation and model stability, and proposes a feature-decorrelation-based stabilization framework: a decorrelation loss with dynamic coefficient scheduling, plus an empirical, quantifiable metric for long-term MD stability. The framework requires no architectural modification and adds only a lightweight regularization term. Contribution/Results: Evaluated on state-of-the-art GNNFFs (e.g., Allegro), the method extends the stable OOD MD simulation time from 0.03 ps to 10 ps (a >330x improvement) with less than 3% computational overhead. Its core contribution lies in linking edge-feature correlation to MD stability and delivering an efficient, generalizable, and interpretable stabilization paradigm for GNNFFs.

📝 Abstract
Recently, Graph Neural Network based Force Field (GNNFF) models have been widely used in Molecular Dynamics (MD) simulation, one of the most cost-effective means in semiconductor material research. However, even though such models achieve low energy and force Mean Absolute Error (MAE) on trained (in-distribution) datasets, they often become unstable during long-time MD simulation when applied to out-of-distribution datasets. In this paper, we propose a feature-correlation-based method to enhance the stability of MD simulation with GNNFF models. We reveal the negative relationship between feature correlation and the stability of GNNFF models, and design a loss function with a dynamic loss coefficient scheduler that reduces edge feature correlation and can be applied in general GNNFF training. We also propose an empirical metric to evaluate stability in MD simulation. Experiments show our method significantly improves stability for GNNFF models, especially on out-of-distribution data, with less than 3% computational overhead. For example, it extends the stable MD simulation time of the Allegro model from 0.03 ps to 10 ps.
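The paper does not give the exact form of its decorrelation loss, but the standard way to penalize feature correlation is to drive the off-diagonal entries of the feature correlation matrix toward zero, weighted by a coefficient that a scheduler adjusts during training. The sketch below (NumPy, with hypothetical names `decorrelation_loss` and `loss_coefficient`, and an assumed linear warm-up schedule) illustrates that general idea, not the authors' implementation; in actual GNNFF training the penalty would be computed on edge-feature tensors inside the autograd graph and added to the energy/force loss as `L_total = L_energy + L_force + lam(t) * L_decorr`.

```python
import numpy as np

def decorrelation_loss(features: np.ndarray) -> float:
    """Mean squared off-diagonal entry of the feature correlation matrix.

    features: (N, D) array, e.g. N edge-feature vectors of dimension D.
    Returns 0 for perfectly decorrelated features, larger values as
    feature dimensions become linearly dependent.
    """
    # Standardize each feature dimension to zero mean, unit variance.
    z = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-8)
    corr = z.T @ z / features.shape[0]            # (D, D) correlation matrix
    off_diag = corr - np.diag(np.diag(corr))      # zero out the diagonal
    return float(np.mean(off_diag ** 2))

def loss_coefficient(epoch: int, total_epochs: int, lam_max: float = 0.1) -> float:
    """Hypothetical dynamic scheduler: linear warm-up to lam_max over the
    first half of training, then constant."""
    return lam_max * min(1.0, epoch / (0.5 * total_epochs))
```

As a sanity check, a batch of identical feature columns should incur a much larger penalty than independent random features, and the coefficient should start at 0 and saturate at `lam_max`.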
Problem

Research questions and friction points this paper is trying to address.

Enhancing stability in GNNFF models
Reducing feature correlation in MD simulations
Improving out-of-distribution dataset performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reduces feature correlation in GNNFF
Applies dynamic loss coefficient scheduler
Introduces empirical stability metric for MD
Yujie Zeng
Queen Mary University of London; University of Electronic Science and Technology of China
Network Science, Higher-order Networks, Complex Systems, Machine Learning
Wenlong He
Samsung Research Institute China Xian, Xian, 710000, China
Ihor Vasyltsov
Samsung Electronics, South Korea
Jiaxin Wei
Technical University of Munich
3D vision, object perception, SLAM
Ying Zhang
Samsung Research Institute China Xian, Xian, 710000, China
Lin Chen
Samsung Research Institute China Xian, Xian, 710000, China
Yuehua Dai
Samsung Research Institute China Xian, Xian, 710000, China