Curvature-Aware Optimization for High-Accuracy Physics-Informed Neural Networks

📅 2026-04-06
🤖 AI Summary
This work addresses the slow convergence and low accuracy of physics-informed neural networks (PINNs) when solving complex partial differential equations and stiff ordinary differential equations. To overcome these limitations, the authors propose a curvature-aware optimization framework that introduces natural gradient, self-scaled BFGS, and Broyden-class quasi-Newton optimizers tailored for PINNs, along with an efficient extension to batch training. The proposed approach significantly accelerates convergence and enhances solution accuracy across a range of challenging problems—including the Helmholtz equation, Stokes flow, inviscid Burgers’ equation, high-speed Euler equations, and stiff pharmacokinetic ODEs—yielding results in close agreement with high-order numerical methods.
📝 Abstract
Efficient and robust optimization is essential for neural networks, enabling scientific machine learning models to converge rapidly to very high accuracy -- faithfully capturing complex physical behavior governed by differential equations. In this work, we present advanced optimization strategies to accelerate the convergence of physics-informed neural networks (PINNs) for challenging partial differential equations (PDEs) and ordinary differential equations (ODEs). Specifically, we provide efficient implementations of the Natural Gradient (NG) optimizer and the Self-Scaled BFGS and Broyden optimizers, and demonstrate their performance on problems including the Helmholtz equation, Stokes flow, the inviscid Burgers' equation, the Euler equations for high-speed flows, and stiff ODEs arising in pharmacokinetics and pharmacodynamics. Beyond optimizer development, we also propose new PINN-based methods for solving the inviscid Burgers' and Euler equations, and compare the resulting solutions against high-order numerical methods to provide a rigorous and fair assessment. Finally, we address the challenge of scaling these quasi-Newton optimizers for batched training, enabling efficient and scalable solutions for large data-driven problems.
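As a rough illustration of the self-scaled quasi-Newton idea described in the abstract (not the paper's actual implementation), the sketch below applies the Oren–Luenberger self-scaling factor to the standard inverse-Hessian BFGS update and runs it on an ill-conditioned quadratic standing in for a PINN loss surface. The function names, the toy problem, and the simple Armijo backtracking line search are all illustrative assumptions.

```python
import numpy as np

def self_scaled_bfgs(f, grad, w0, iters=100, tol=1e-10):
    """Minimize f with BFGS plus Oren-Luenberger self-scaling (illustrative sketch)."""
    w = w0.astype(float).copy()
    n = w.size
    H = np.eye(n)                  # inverse-Hessian approximation
    g = grad(w)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                 # curvature-aware search direction
        # simple Armijo backtracking line search
        alpha, c, shrink = 1.0, 1e-4, 0.5
        fw = f(w)
        while f(w + alpha * p) > fw + c * alpha * (g @ p):
            alpha *= shrink
        s = alpha * p
        w_new = w + s
        g_new = grad(w_new)
        y = g_new - g
        sy = s @ y
        if sy > 1e-12:             # curvature condition keeps H positive definite
            Hy = H @ y
            tau = sy / (y @ Hy)    # self-scaling factor applied before the BFGS update
            rho = 1.0 / sy
            V = np.eye(n) - rho * np.outer(s, y)
            H = tau * (V @ H @ V.T) + rho * np.outer(s, s)
        w, g = w_new, g_new
    return w

# Ill-conditioned quadratic as a stand-in for a stiff PINN loss landscape
A = np.diag([1.0, 100.0])
b = np.array([1.0, 1.0])
f = lambda w: 0.5 * w @ A @ w - b @ w
grad = lambda w: A @ w - b
w_star = self_scaled_bfgs(f, grad, np.zeros(2))
```

The scaling factor `tau` rescales the inverse-Hessian estimate so its eigenvalues better match the local curvature along the most recent step, which is the kind of conditioning benefit the paper targets for stiff and ill-conditioned PINN losses.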
Problem

Research questions and friction points this paper is trying to address.

physics-informed neural networks
optimization
differential equations
convergence
high-accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Physics-Informed Neural Networks
Natural Gradient
Quasi-Newton Optimization
Curvature-Aware Optimization
Scalable Training
Anas Jnini
PhD Student
Elham Kiyani
Division of Applied Mathematics, Brown University, Providence, RI, USA
Khemraj Shukla
Rice University
Applied Mathematics, PDE, Machine Learning, HPC
Jorge F. Urban
Departament de Física, Universitat d’Alacant, Spain
Nazanin Ahmadi Daryakenari
Center for Biomedical Engineering, Brown University, Providence, RI, USA
Johannes Müller
Institute of Mathematics, TU Berlin, Germany
Marius Zeinhofer
ETH Postdoctoral Fellow
Scientific Machine Learning, Numerical Analysis
George Em Karniadakis
The Charles Pitts Robinson and John Palmer Barstow Professor of Applied Mathematics and Engineering
Math+Machine Learning, Probabilistic Scientific Computing, Stochastic Multiscale Modeling