Active Learning for Neural PDE Solvers

📅 2024-08-02
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
Neural PDE solvers depend on large volumes of expensive, high-fidelity training data, which limits their training efficiency. To address this, the paper proposes AL4PDE, a solver-in-the-loop active learning (AL) framework for neural PDE solving. AL4PDE is a modular, extensible AL benchmark tailored to neural PDE solvers: it combines batched uncertainty- and feature-based acquisition with solver-in-the-loop data generation, and covers multiple parametric PDEs together with state-of-the-art surrogate models. Experiments show that AL reduces average error by up to 71% compared to random sampling and substantially suppresses worst-case error. The actively generated datasets are consistent across repeated runs and transfer well, benefiting surrogate models not involved in the data generation. The core contribution is a systematic account of how active learning improves accuracy, robustness, and data reusability in neural PDE solving, establishing groundwork for efficient, data-aware scientific machine learning.
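The solver-in-the-loop setting described above can be sketched as a simple acquisition loop: score a pool of candidate initial conditions and PDE parameters with the surrogate, then spend the expensive classical solver only on the most informative batch. The function names and interfaces below are illustrative assumptions, not the actual AL4PDE API.

```python
import numpy as np

def al_round(surrogate, numerical_solver, sample_candidates,
             acquisition_score, pool_size=256, batch_size=32):
    """One hypothetical solver-in-the-loop active learning round.

    Selects the highest-scoring candidate inputs (e.g. initial
    conditions / PDE parameters), labels them with the classical
    solver, and returns new (input, solution) training pairs.
    """
    # 1. Draw a candidate pool of unlabeled inputs.
    candidates = sample_candidates(pool_size)
    # 2. Score each candidate, e.g. by surrogate uncertainty.
    scores = np.array([acquisition_score(surrogate, c) for c in candidates])
    # 3. Keep only the top-scoring batch.
    chosen = [candidates[i] for i in np.argsort(scores)[-batch_size:]]
    # 4. Query the expensive classical solver on the chosen batch only.
    return [(c, numerical_solver(c)) for c in chosen]
```

In this sketch the acquisition function is pluggable, which mirrors how a modular benchmark would let uncertainty-based and feature-based strategies be swapped in.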

📝 Abstract
Solving partial differential equations (PDEs) is a fundamental problem in science and engineering. While neural PDE solvers can be more efficient than established numerical solvers, they often require large amounts of training data that is costly to obtain. Active learning (AL) could help surrogate models reach the same accuracy with smaller training sets by querying classical solvers with more informative initial conditions and PDE parameters. While AL is more common in other domains, it has yet to be studied extensively for neural PDE solvers. To bridge this gap, we introduce AL4PDE, a modular and extensible active learning benchmark. It provides multiple parametric PDEs and state-of-the-art surrogate models for the solver-in-the-loop setting, enabling the evaluation of existing and the development of new AL methods for neural PDE solving. We use the benchmark to evaluate batch active learning algorithms such as uncertainty- and feature-based methods. We show that AL reduces the average error by up to 71% compared to random sampling and significantly reduces worst-case errors. Moreover, AL generates similar datasets across repeated runs, with consistent distributions over the PDE parameters and initial conditions. The acquired datasets are reusable, providing benefits for surrogate models not involved in the data generation.
Problem

Research questions and friction points this paper is trying to address.

Reducing training data cost for neural PDE solvers
Evaluating active learning in neural PDE solving
Improving accuracy and reusability of surrogate models
Innovation

Methods, ideas, or system contributions that make the work stand out.

Active learning reduces training data needs
Modular benchmark for neural PDE solvers
Batch AL methods cut error significantly
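One concrete instance of the batch AL methods evaluated is ensemble-based uncertainty sampling: candidates where the ensemble members disagree most are queried first. The implementation below is a minimal, assumed sketch of that idea, not code from the paper.

```python
import numpy as np

def ensemble_uncertainty(predictions):
    """predictions: array of shape (n_models, n_candidates, *field_dims).

    Returns one scalar uncertainty per candidate: the variance across
    ensemble members, averaged over the predicted solution field.
    """
    var = predictions.var(axis=0)  # disagreement per grid point
    return var.reshape(var.shape[0], -1).mean(axis=1)

def select_batch(predictions, batch_size):
    """Indices of the batch_size candidates with the highest uncertainty,
    most uncertain first."""
    scores = ensemble_uncertainty(predictions)
    return np.argsort(scores)[-batch_size:][::-1]
```

Selecting a whole batch by top-k score is the simplest batched strategy; diversity-aware variants additionally penalize picking near-duplicate candidates in feature space.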
🔎 Similar Papers
2024-10-09 · International Conference on Learning Representations · Citations: 2
Daniel Musekamp
University of Stuttgart
Marimuthu Kalimuthu
University of Stuttgart, SimTech, IMPRS-IS, NEC Labs Europe
David Holzmüller
INRIA Paris, École Normale Supérieure, PSL University
Makoto Takamoto
NEC Labs Europe
Mathias Niepert
University of Stuttgart & NEC Labs Europe
Machine learning