🤖 AI Summary
Traditional numerical methods for partial differential equations (PDEs) struggle with the curse of dimensionality, high computational costs, and reliance on domain-specific discretization, limiting their effectiveness in tackling nonlinear, multivariate forward and inverse problems as well as equation discovery. This work proposes a novel framework that integrates a learning-guided strategy with the Kansa collocation method, extending continuous normalizing flows (CNFs) for the first time to nonlinear PDE systems with multiple dependent variables. The approach enables end-to-end self-tuning and mesh-free discretization, providing a unified treatment of forward simulation, parameter inversion, and equation discovery. It achieves high accuracy across multiple benchmark problems and is accompanied by an open-source neural PDE solver along with a comprehensive survey of the field.
Abstract
Partial differential equations (PDEs) precisely model physical, biological, and graphical phenomena. However, traditional numerical methods suffer from the curse of dimensionality, high computational costs, and domain-specific discretization. We aim to explore the pros and cons of different PDE solvers and apply them to specific scientific simulation problems, including forward solution, inverse problems, and equation discovery. In particular, we extend the recent CNF (NeurIPS 2023) framework solver to multi-dependent-variable and nonlinear settings, together with downstream applications. The outcomes include implementations of the selected methods, self-tuning techniques, evaluation on benchmark problems, and a comprehensive survey of neural PDE solvers and scientific simulation applications.