🤖 AI Summary
To address the high data cost and low sample efficiency of training atomistic and coarse-grained implicit-solvent neural network potentials, this work proposes an end-to-end differentiable joint-optimization framework for deep potentials. The method builds thermodynamic constraints (specifically, free-energy consistency) directly into the automatic-differentiation pipeline, enabling physics-guided, concurrent training on potential energies, forces, and free energies. It combines JAX/TensorFlow-based automatic differentiation, statistical-physics-informed loss functions, multi-scale joint optimization of energies and forces, and Monte Carlo sampling-driven active learning. Across diverse molecular systems, the framework reaches DFT-level accuracy (mean absolute error < 1 meV/atom), trains roughly three times faster, and generalizes markedly better than conventional fitting approaches.
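To make the joint energy/force training idea concrete, here is a minimal JAX sketch of the pattern the summary describes: forces are obtained from the energy model by automatic differentiation, and a single differentiable loss combines energy and force terms. The potential (`toy_potential`), the weights `w_e`/`w_f`, and all other names are illustrative assumptions, not the paper's actual model or loss.

```python
import jax
import jax.numpy as jnp

def toy_potential(params, positions):
    # Illustrative stand-in for a neural network potential:
    # a simple pairwise inverse-square interaction over all atom pairs.
    diffs = positions[:, None, :] - positions[None, :, :]
    r2 = jnp.sum(diffs**2, axis=-1) + jnp.eye(positions.shape[0])
    return params["a"] * jnp.sum(jnp.triu(1.0 / r2, k=1))

# Forces are the negative gradient of the energy w.r.t. positions;
# automatic differentiation provides them without extra labels or code.
forces_fn = jax.grad(lambda p, pos: -toy_potential(p, pos), argnums=1)

def joint_loss(params, pos, e_ref, f_ref, w_e=1.0, w_f=0.1):
    # One differentiable objective over energies and forces; a free-energy
    # consistency term would be added here in the same fashion (assumed).
    e_pred = toy_potential(params, pos)
    f_pred = forces_fn(params, pos)
    return w_e * (e_pred - e_ref) ** 2 + w_f * jnp.mean((f_pred - f_ref) ** 2)

# Gradients of the joint loss w.r.t. model parameters drive training.
grad_loss = jax.grad(joint_loss)
```

Because the whole pipeline (energy, derived forces, loss) lives inside one autodiff graph, every training term shares the same parameters, which is the "end-to-end differentiable joint optimization" the summary refers to.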