chemtrain-deploy: A parallel and scalable framework for machine learning potentials in million-atom MD simulations

📅 2025-06-04
🤖 AI Summary
Existing MLP software suffers from three key limitations: strong architectural coupling, poor integration with molecular dynamics (MD) engines (e.g., LAMMPS), and lack of multi-GPU parallelism. To address these, we propose the first model-agnostic, JAX-based MLP deployment framework. Our approach achieves deep decoupling between the LAMMPS plugin interface and JAX’s automatic differentiation and XLA compilation, enabling generalized encapsulation of semi-local interatomic potentials. We further introduce a novel MPI/GPU hybrid parallelization scheme coupled with force computation offloading, allowing arbitrary JAX-defined potentials to execute efficiently within LAMMPS. Evaluation on state-of-the-art models—including MACE, Allegro, and PaiNN—demonstrates near-linear weak scaling for million-atom MD simulations, over 40% reduction in per-step latency, and substantial performance gains over existing toolchains.
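The summary's key enabler is that any semi-local potential written as a differentiable JAX energy function yields forces automatically through `jax.grad` and compiles end-to-end with XLA. A minimal sketch of this idea, using a toy Lennard-Jones cluster energy (the function name `lj_energy` and all parameters below are illustrative, not the chemtrain-deploy API):

```python
import jax
import jax.numpy as jnp

def lj_energy(positions, sigma=1.0, epsilon=1.0):
    """Total Lennard-Jones energy of a small cluster (no PBC, no cutoff)."""
    diffs = positions[:, None, :] - positions[None, :, :]
    r2 = jnp.sum(diffs ** 2, axis=-1)
    n = positions.shape[0]
    # Mask the diagonal to avoid self-interaction and division by zero.
    mask = ~jnp.eye(n, dtype=bool)
    r2 = jnp.where(mask, r2, 1.0)
    inv_r6 = (sigma ** 2 / r2) ** 3
    pair_e = 4.0 * epsilon * (inv_r6 ** 2 - inv_r6)
    # Each pair is counted twice in the full n x n matrix, hence the 0.5.
    return 0.5 * jnp.sum(jnp.where(mask, pair_e, 0.0))

# Forces follow from automatic differentiation (F = -dE/dx);
# jax.jit hands the whole computation to XLA for compilation.
forces_fn = jax.jit(jax.grad(lambda x: -lj_energy(x)))

positions = jnp.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
forces = forces_fn(positions)
```

Because the MD engine only ever needs energies and forces for a local neighborhood of atoms, any model fitting this pattern, from a pair potential to a message-passing GNN such as MACE or PaiNN, can in principle be swapped in without touching the LAMMPS side.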

📝 Abstract
Machine learning potentials (MLPs) have advanced rapidly and show great promise to transform molecular dynamics (MD) simulations. However, most existing software tools are tied to specific MLP architectures, lack integration with standard MD packages, or are not parallelizable across GPUs. To address these challenges, we present chemtrain-deploy, a framework that enables model-agnostic deployment of MLPs in LAMMPS. chemtrain-deploy supports any JAX-defined semi-local potential, allowing users to exploit the functionality of LAMMPS and perform large-scale MLP-based MD simulations on multiple GPUs. It achieves state-of-the-art efficiency and scales to systems containing millions of atoms. We validate its performance and scalability using graph neural network architectures, including MACE, Allegro, and PaiNN, applied to a variety of systems, such as liquid-vapor interfaces, crystalline materials, and solvated peptides. Our results highlight the practical utility of chemtrain-deploy for real-world, high-performance simulations and provide guidance for MLP architecture selection and future design.
Problem

Research questions and friction points this paper is trying to address.

Most MLP software is tied to a specific model architecture
Existing tools lack integration with standard MD packages such as LAMMPS
Current MLP deployments do not parallelize across multiple GPUs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model-agnostic MLP deployment in LAMMPS
JAX-defined semi-local potential support
Multi-GPU scalable million-atom MD
Paul Fuchs
Multiscale Modeling of Fluid Materials, Technical University of Munich
Molecular Dynamics, Machine Learning
Weilong Chen
Nanyang Technological University
Computer Vision, Pattern Recognition, Machine Learning
Stephan Thaler
Valence Labs
Numerical Simulation, Machine Learning
J. Zavadlav
Professorship of Multiscale Modeling of Fluid Materials, Department of Engineering Physics and Computation, TUM School of Engineering and Design, Technical University of Munich, Germany; Atomistic Modeling Center (AMC), Munich Data Science Institute (MDSI), Technical University of Munich, Germany