🤖 AI Summary
This work addresses the slow convergence and high communication overhead commonly encountered in training finite basis physics-informed neural networks (FBPINNs) by proposing a multi-preconditioned L-BFGS algorithm based on the nonlinear additive Schwarz method. Exploiting the intrinsic domain decomposition structure of FBPINNs, the approach constructs local quasi-Newton corrections in parallel across subdomains and optimally combines these updates through a low-dimensional subspace minimization problem. The resulting nonlinear multi-preconditioning mechanism effectively balances convergence rate, solution accuracy, and communication efficiency. Experimental results demonstrate that, compared to standard L-BFGS, the proposed method significantly accelerates convergence, improves model accuracy, and substantially reduces communication costs.
📝 Abstract
A multi-preconditioned L-BFGS (MP-LBFGS) algorithm is introduced for training finite basis physics-informed neural networks (FBPINNs). The algorithm is motivated by the nonlinear additive Schwarz method and exploits the domain-decomposition-inspired additive architecture of FBPINNs, in which local neural networks are defined on subdomains, thereby localizing the network representation. Parallel, subdomain-local quasi-Newton corrections are then constructed on the corresponding local parts of the architecture. A key feature is a novel nonlinear multi-preconditioning mechanism, in which the subdomain corrections are optimally combined by solving a low-dimensional subspace minimization problem. Numerical experiments indicate that MP-LBFGS can improve both convergence speed and model accuracy over standard L-BFGS while incurring lower communication overhead.
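To make the multi-preconditioning idea concrete, the following is a minimal, hypothetical sketch (not the paper's implementation): parameters are split into blocks standing in for subdomain subnetworks, a local correction is formed per block, and the corrections are combined by minimizing the loss over their low-dimensional span. A toy quadratic replaces the FBPINN loss, and plain block-restricted gradient steps replace the local L-BFGS corrections; names such as `blocks` and `D` are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 12, 3                                  # parameters, "subdomains"
A = rng.standard_normal((n, n))
A = A.T @ A + n * np.eye(n)                   # SPD matrix -> convex toy loss
b = rng.standard_normal(n)

def loss(theta):                              # toy quadratic stand-in for the FBPINN loss
    return 0.5 * theta @ A @ theta - b @ theta

def grad(theta):
    return A @ theta - b

theta = rng.standard_normal(n)
blocks = np.array_split(np.arange(n), K)      # parameter blocks ~ subdomain subnetworks

# Local corrections: here a gradient step restricted to each block.
# (The actual method would build these with local quasi-Newton updates.)
D = np.zeros((K, n))
for k, idx in enumerate(blocks):
    D[k, idx] = -grad(theta)[idx]

# Nonlinear multi-preconditioning step: choose alpha in R^K minimizing
# loss(theta + D.T @ alpha). For the quadratic toy loss this K-dimensional
# subspace problem has a closed-form solution.
G = D @ A @ D.T                               # K x K reduced Hessian
rhs = -D @ grad(theta)                        # reduced gradient at alpha = 0
alpha = np.linalg.solve(G, rhs)
theta_new = theta + D.T @ alpha

print(loss(theta), "->", loss(theta_new))     # combined step decreases the loss
```

For a genuine nonconvex training loss the subspace problem has no closed form and would itself be solved with a few iterations of a low-dimensional optimizer; since alpha = 0 recovers the uncorrected iterate, the combined step can never be worse than taking no step at all.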