🤖 AI Summary
Existing machine learning interatomic potentials (MLIPs) suffer from insufficient accuracy in out-of-distribution (OOD) and few-shot regimes and lack reliable uncertainty quantification. To address these limitations, we propose Bayesian Learned Interatomic Potentials (BLIPs), a scalable, architecture-agnostic variational Bayesian framework that integrates adaptive variational dropout with equivariant message-passing networks. BLIP enables high-fidelity prediction of energies and forces while yielding well-calibrated uncertainty estimates. The method supports end-to-end training and fine-tuning, improving model generalization and trustworthiness. Experiments show that BLIP consistently outperforms state-of-the-art MLIPs, including SchNet and DimeNet++, on data-scarce and OOD benchmarks. Crucially, its uncertainty estimates are well calibrated in the frequentist sense, yielding statistically sound confidence intervals. By unifying probabilistic modeling with equivariant deep learning, BLIP establishes a more robust and interpretable paradigm for constructing interatomic potentials in computational chemistry.
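To make the idea concrete, here is a minimal sketch of a variational-dropout linear layer of the kind such a framework could wrap around the layers of a message-passing MLIP. The parameterization below (a per-weight `log_sigma2`, local reparameterization, and the log-uniform-prior KL approximation of Molchanov et al., 2017) is an assumption for illustration, as are the class and method names; it is not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class VariationalDropoutLinear(nn.Module):
    """Linear layer with multiplicative Gaussian noise on the weights.

    Sketch of generic variational dropout (Molchanov et al., 2017 style);
    the adaptive variant used by BLIP is assumed to differ in detail.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Learned per-weight noise variance (log-parameterized).
        self.log_sigma2 = nn.Parameter(
            torch.full((out_features, in_features), -10.0)
        )
        nn.init.kaiming_uniform_(self.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Local reparameterization: sample the pre-activations
            # rather than the weights, for lower-variance gradients.
            mean = F.linear(x, self.weight, self.bias)
            var = F.linear(x.pow(2), self.log_sigma2.exp()) + 1e-8
            return mean + var.sqrt() * torch.randn_like(mean)
        # Deterministic mean weights at evaluation time.
        return F.linear(x, self.weight, self.bias)

    def kl(self) -> torch.Tensor:
        # Approximate KL to the log-uniform prior
        # (Molchanov et al., 2017, Eq. 14).
        log_alpha = self.log_sigma2 - self.weight.pow(2).clamp_min(1e-8).log()
        k1, k2, k3 = 0.63576, 1.87320, 1.48695
        neg_kl = (k1 * torch.sigmoid(k2 + k3 * log_alpha)
                  - 0.5 * F.softplus(-log_alpha) - k1)
        return -neg_kl.sum()
```

During training, the `kl()` terms of all such layers would be summed and added, with an appropriate scaling factor, to the energy/force loss to form the variational objective; that weighting scheme is likewise an assumption here.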
📝 Abstract
Machine Learning Interatomic Potentials (MLIPs) are becoming a central tool in simulation-based chemistry. However, like most deep learning models, MLIPs struggle to make accurate predictions on out-of-distribution data or when trained in a data-scarce regime, both common scenarios in simulation-based chemistry. Moreover, MLIPs do not provide uncertainty estimates by construction; such estimates are fundamental for guiding active learning pipelines and for assessing the accuracy of simulation results against reference quantum calculations. To address these shortcomings, we propose BLIPs: Bayesian Learned Interatomic Potentials. BLIP is a scalable, architecture-agnostic variational Bayesian framework for training or fine-tuning MLIPs, built on an adaptive version of Variational Dropout. BLIP delivers well-calibrated uncertainty estimates with minimal computational overhead for energy and force prediction at inference time, while integrating seamlessly with (equivariant) message-passing architectures. Empirical results on simulation-based computational chemistry tasks demonstrate improved predictive accuracy over standard MLIPs and trustworthy uncertainty estimates, especially in data-scarce or heavily out-of-distribution regimes. Moreover, fine-tuning pretrained MLIPs with BLIP yields consistent performance gains and calibrated uncertainties.
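For inference-time uncertainty, one common way to realize the predictive distribution of such a framework is Monte Carlo sampling over the learned weight noise; a hedged sketch follows. Whether BLIP obtains its uncertainties this way or via a closed form, and the `predict_with_uncertainty` / `model(batch)` interface, are assumptions for illustration.

```python
import torch


def predict_with_uncertainty(model, batch, n_samples: int = 20):
    """Monte Carlo predictive mean and spread for energies.

    Assumed interface: `model(batch)` returns per-structure energies.
    Keeping the variational layers in sampling mode at inference is a
    standard way to draw from the predictive distribution.
    """
    model.train()  # keep the learned weight noise active
    with torch.no_grad():
        samples = torch.stack([model(batch) for _ in range(n_samples)])
    model.eval()
    # Mean as the point prediction, std as the uncertainty estimate.
    # (Forces come from energy gradients, so sampling force
    # uncertainties would need grad-enabled passes instead of no_grad.)
    return samples.mean(dim=0), samples.std(dim=0)
```

In practice, the sampled standard deviations can be compared against held-out prediction errors (e.g. via reliability diagrams) to check the frequentist calibration the paper reports.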