🤖 AI Summary
To address the high computational cost of repeatedly training physics-informed neural networks (PINNs) in multi-query inverse problems, this paper proposes the IP-Basis PINNs framework, which decouples computation into offline and online phases. Offline, a deep network learns a parameterized basis for the PDE solution space; online, only the coefficients of a lightweight linear combination are fine-tuned to adapt rapidly to new observations. Key contributions include: (1) a joint loss function that simultaneously optimizes solution reconstruction and parameter identification; (2) efficient PDE residual evaluation via forward-mode automatic differentiation; and (3) a robust offline validation strategy with early stopping to enhance generalization and stability. The method supports both constant and functional parameter estimation. On three benchmark inverse tasks it significantly outperforms standard PINNs, achieving up to an order-of-magnitude speedup per query while remaining robust under sparse, noisy, and partially unknown (e.g., unknown functional terms) conditions.
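The offline-online split described above can be sketched in a few lines. In this hedged toy, the "offline-trained basis" is stood in for by fixed Fourier features (the paper instead trains a deep network to produce the basis), and the online phase fits only the linear coefficients to new observations, here by plain least squares rather than the paper's joint loss:

```python
import numpy as np

# Stand-in for the offline-trained basis network: in IP-Basis PINNs the
# columns phi_k(x) come from a deep network; Fourier features are used
# here purely for illustration.
def basis(x, n_basis=8):
    k = np.arange(1, n_basis + 1)
    return np.sin(np.outer(x, k * np.pi))  # shape (len(x), n_basis)

# Online phase: the basis is frozen; only the linear output coefficients
# are fit to the new (noisy, sparse) observations.
rng = np.random.default_rng(0)
x_obs = rng.uniform(0.0, 1.0, size=40)
u_true = np.sin(np.pi * x_obs) + 0.3 * np.sin(3 * np.pi * x_obs)
u_obs = u_true + 0.01 * rng.normal(size=x_obs.shape)

Phi = basis(x_obs)
coeffs, *_ = np.linalg.lstsq(Phi, u_obs, rcond=None)
u_fit = Phi @ coeffs  # reconstructed solution at the observation points
```

Because only a small linear system is solved per query, each new set of observations is cheap to process; this is the source of the per-query speedup the summary refers to.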
📝 Abstract
Solving inverse problems with Physics-Informed Neural Networks (PINNs) is computationally expensive for multi-query scenarios, as each new set of observed data requires a new, expensive training procedure. We present Inverse-Parameter Basis PINNs (IP-Basis PINNs), a meta-learning framework that extends the foundational work of Desai et al. (2022) to enable rapid and efficient inference for inverse problems. Our method employs an offline-online decomposition: a deep network is first trained offline to produce a rich set of basis functions that span the solution space of a parametric differential equation. For each new inverse problem online, this network is frozen, and solutions and parameters are inferred by training only a lightweight linear output layer against observed data. Key innovations that make our approach effective for inverse problems include: (1) a novel online loss formulation for simultaneous solution reconstruction and parameter identification, (2) a significant reduction in computational overhead via forward-mode automatic differentiation for PDE loss evaluation, and (3) a non-trivial validation and early-stopping mechanism for robust offline training. We demonstrate the efficacy of IP-Basis PINNs on three diverse benchmarks, including an extension to universal PINNs for unknown functional terms, showing consistent performance across constant and functional parameter estimation, a significant speedup per query over standard PINNs, and robust operation with scarce and noisy data.
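Forward-mode automatic differentiation, which the abstract credits for reducing the cost of PDE loss evaluation, computes a derivative alongside the primal value in a single forward pass, with no backward graph to store. A minimal dual-number sketch of the mechanism (illustrative only; the paper's actual implementation and framework are not shown here, and `u` below is a hypothetical stand-in for a network output):

```python
import math
from dataclasses import dataclass

@dataclass
class Dual:
    val: float  # primal value
    dot: float  # tangent: derivative along the seeded direction

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def dsin(z):
    # Chain rule for sin, propagated through the tangent component.
    return Dual(math.sin(z.val), math.cos(z.val) * z.dot)

# Toy "network output" u(x) = x * sin(x); seeding dot=1.0 at the input
# yields u'(x) in the same forward pass.
def u(x):
    return x * dsin(x)

out = u(Dual(1.5, 1.0))
# out.val is u(1.5); out.dot is u'(1.5) = sin(1.5) + 1.5 * cos(1.5)
```

For PINN residuals, the derivative of the network with respect to its spatial or temporal inputs is exactly what the PDE loss needs, so obtaining it in one forward sweep avoids the extra reverse-mode passes a standard PINN would take per collocation point.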