On The Finetuning of MLIPs Through the Lens of Iterated Maps With BPTT

📅 2025-11-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional machine-learned interatomic potential (MLIP) training relies on high-fidelity ab initio force labels, incurring prohibitive data generation and computational costs—especially for structural relaxation tasks. This work proposes an end-to-end differentiable relaxation framework: structural relaxation is formulated as an iterative mapping process, and MLIP parameters are optimized directly via backpropagation through time (BPTT), using only the final relaxed structure’s prediction error as the supervisory signal—eliminating dependence on force labels. Crucially, the method fine-tunes the potential energy surface itself without altering the underlying relaxation algorithm or introducing auxiliary learnable modules. Evaluated across diverse material systems, it achieves ~50% average reduction in structural prediction error. Moreover, it exhibits strong robustness to hyperparameter variations and procedural changes, significantly enhancing generalizability and deployment efficiency.

📝 Abstract
Structural relaxation is vital to the design of advanced materials. Traditional approaches built on physics-derived first-principles calculations are computationally expensive, motivating machine-learned interatomic potentials (MLIPs). MLIPs for structural relaxation are conventionally trained to faithfully reproduce first-principles computed forces. We propose a fine-tuning method for pretrained MLIPs in which we build a fully differentiable end-to-end simulation loop that optimizes the predicted final structures directly. Trajectories are unrolled and gradients are tracked through the entire relaxation. Applied to pretrained models, this method yields substantial performance gains, reducing test error by nearly 50% across the sample datasets. Interestingly, the process is robust to substantial variation in the relaxation setup, achieving negligibly different results across varied hyperparameter and procedural modifications. Experimental results indicate this stems from a "preference" of BPTT for modifying the MLIP rather than the other trainable parameters. Of particular interest to practitioners, the approach lowers the data requirements for producing an effective domain-specific MLIP, addressing a common bottleneck in practical deployment.
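The core mechanism — relax as an iterated map, then backpropagate through the unrolled trajectory using only the final-structure error — can be illustrated with a deliberately tiny, hand-rolled sketch. Everything here is a hypothetical stand-in for the paper's setup: a one-parameter 1-D "energy surface" E(x; θ) = (x − θ)² plays the role of the MLIP, and the backward pass is written out by hand rather than by an autodiff framework:

```python
def relax(theta, x0, eta, steps):
    """Forward pass: unroll the relaxation as an iterated map."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        grad_E = 2.0 * (x - theta)   # dE/dx for the toy potential
        xs.append(x - eta * grad_E)  # one gradient-descent relaxation step
    return xs

def bptt_grad(theta, x0, x_target, eta, steps):
    """Backward pass: hand-rolled BPTT through the unrolled loop.

    The loss is only on the final structure: L = (x_T - x_target)^2.
    Each step is x_{t+1} = (1 - 2*eta)*x_t + 2*eta*theta, so
    d x_{t+1}/d x_t = 1 - 2*eta and d x_{t+1}/d theta = 2*eta.
    """
    xs = relax(theta, x0, eta, steps)
    g = 2.0 * (xs[-1] - x_target)    # dL/dx_T
    d_theta = 0.0
    for _ in range(steps):           # walk the trajectory backwards
        d_theta += g * 2.0 * eta     # local dependence of the step on theta
        g *= (1.0 - 2.0 * eta)       # propagate through the step Jacobian
    return xs[-1], d_theta

# Fine-tune theta so the relaxed structure lands on x_target,
# touching only the "potential" -- the relaxation algorithm is unchanged.
theta, x0, x_target, eta, steps = 0.5, 0.0, 1.0, 0.1, 20
for _ in range(50):
    x_final, d_theta = bptt_grad(theta, x0, x_target, eta, steps)
    theta -= 0.5 * d_theta           # gradient step on the toy "MLIP"
print(round(x_final, 3))             # relaxed structure approaches 1.0
```

Note what this mirrors from the paper: no force labels appear anywhere — the only supervision is the final relaxed position — and the relaxation procedure itself is untouched; only the potential's parameter moves.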
Problem

Research questions and friction points this paper is trying to address.

Conventional MLIP training requires high-fidelity ab initio force labels, whose generation is computationally expensive
Force-matching optimizes forces rather than the quantity practitioners ultimately care about: the final relaxed structure
High data requirements make producing an effective domain-specific MLIP a bottleneck in practical deployment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tunes pretrained MLIPs via differentiable simulation loops
Uses BPTT to optimize final structures directly
Reduces data needs for domain-specific MLIP deployment