AI Summary
Current neural network interatomic potential models lack explicit physical constraints, resulting in low training efficiency and insufficient prediction fidelity. To address this, we propose the Φ-Module, a plug-and-play, zero-overhead architectural component that, for the first time, embeds the discrete Poisson equation into a message-passing framework. By jointly representing the electrostatic potential and charge density in the eigenbasis of the molecular graph Laplacian, it enables self-supervised learning of electrostatic interactions without additional labeled data. The method offers strong physical interpretability and computational efficiency. On the OE62 dataset, it reduces energy prediction error by 4.5–17.8%; on the MD22 benchmark, it achieves state-of-the-art performance on 5 out of 14 tasks. Moreover, it significantly lowers memory consumption during training and reduces sensitivity to hyperparameters. This work establishes a new paradigm for physics-informed neural potential modeling.
Abstract
Neural network interatomic potentials have recently emerged as a promising research direction. However, popular deep learning models often lack auxiliary constraints grounded in physical laws, which could accelerate training and improve fidelity through physics-based regularization. In this work, we introduce $\Phi$-Module, a universal plugin module that enforces Poisson's equation within the message-passing framework to learn electrostatic interactions in a self-supervised manner. Specifically, each atom-wise representation is encouraged to satisfy a discretized Poisson's equation, making it possible to acquire a potential $\boldsymbol{\phi}$ and a corresponding charge density $\boldsymbol{\rho}$ linked to the learnable Laplacian eigenbasis coefficients of a given molecular graph. We then derive an electrostatic energy term, crucial for improved total energy predictions. This approach integrates seamlessly into any existing neural potential with insignificant computational overhead. Experiments on the OE62 and MD22 benchmarks confirm that models combined with $\Phi$-Module achieve robust improvements over baseline counterparts. For OE62, error reduction ranges from 4.5% to 17.8%; for MD22, baselines equipped with $\Phi$-Module achieve the best results on 5 out of 14 cases. Our results underscore how embedding a first-principles constraint in neural interatomic potentials can significantly improve performance while remaining hyperparameter-friendly, memory-efficient, and lightweight in training. Code will be available at \href{https://github.com/dunnolab/phi-module}{dunnolab/phi-module}.
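The core idea described in the abstract can be illustrated compactly. Below is a minimal NumPy sketch (our own illustration, not the authors' implementation: the function names, the eigenbasis truncation `k`, and the toy coefficients are assumptions) of expanding $\boldsymbol{\phi}$ and $\boldsymbol{\rho}$ in the graph-Laplacian eigenbasis, penalizing the residual of the discrete Poisson equation $L\boldsymbol{\phi} = \boldsymbol{\rho}$ as a self-supervised loss, and forming an electrostatic energy term $\tfrac{1}{2}\boldsymbol{\phi}^\top\boldsymbol{\rho}$:

```python
# Hypothetical sketch of the Phi-Module idea; names are illustrative.
import numpy as np

def graph_laplacian(adj):
    """Combinatorial Laplacian L = D - A of a molecular graph."""
    deg = np.diag(adj.sum(axis=1))
    return deg - adj

def poisson_residual_and_energy(adj, c_phi, c_rho, k=4):
    """Self-supervised residual ||L phi - rho||^2, with phi and rho
    expanded in the first k Laplacian eigenvectors. In the actual
    model the coefficients c_phi, c_rho would be predicted from
    atom-wise representations; here they are given directly."""
    L = graph_laplacian(adj)
    _, eigvecs = np.linalg.eigh(L)      # eigenbasis of the graph Laplacian
    U = eigvecs[:, :k]                  # truncated eigenbasis
    phi = U @ c_phi                     # potential per atom
    rho = U @ c_rho                     # charge density per atom
    residual = L @ phi - rho            # discrete Poisson residual
    e_elec = 0.5 * phi @ rho            # derived electrostatic energy term
    return float(residual @ residual), float(e_elec)

# Toy 4-atom chain graph with uniform coefficients.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
loss, e_elec = poisson_residual_and_energy(A, np.full(4, 0.1), np.full(4, 0.1))
```

In training, `loss` would act as an auxiliary regularizer alongside the usual energy/force objectives, while `e_elec` would be added to the predicted total energy; neither requires extra labels, which is what makes the constraint self-supervised.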