🤖 AI Summary
Neural operator training is severely constrained by reliance on data generated via traditional numerical solvers, which are computationally expensive, discretization-dependent, and introduce approximation errors.
Method: We propose a solver-free backward synthetic data generation framework: candidate solutions $u_j$ are randomly sampled from the solution space (e.g., $H_0^1(\Omega)$), and their exact source terms $f_j = \mathcal{L}u_j$ are computed directly via automatic differentiation, yielding unlimited, zero-error $(f_j, u_j)$ training pairs without numerical discretization.
Contribution/Results: This work introduces the first “solution-to-source” inverse generation paradigm, eliminating dependence on numerical solvers while preserving mathematical rigor and computational scalability. Experiments demonstrate that neural operators trained exclusively on synthetic data achieve generalization performance on multiple PDE benchmarks comparable to—or even exceeding—that of models trained on solver-generated data.
📝 Abstract
Recent advances in the literature show promising potential of deep learning methods, particularly neural operators, in obtaining numerical solutions to partial differential equations (PDEs) beyond the reach of current numerical solvers. However, existing data-driven approaches often rely on training data produced by numerical PDE solvers (e.g., finite difference or finite element methods). We introduce a "backward" data generation method that avoids solving the PDE numerically: by randomly sampling candidate solutions $u_j$ from the appropriate solution space (e.g., $H_0^1(\Omega)$), we compute the corresponding right-hand side $f_j$ directly from the equation by differentiation. This produces training pairs $\{(f_j, u_j)\}$ by computing derivatives rather than solving a PDE numerically for each data point, enabling fast, large-scale generation of exact training data. Experiments indicate that models trained on this synthetic data generalize well when tested on data produced by standard solvers. While the idea is simple, we hope this method will expand the potential of neural PDE solvers that do not rely on classical numerical solvers to generate their data.
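The backward generation step described above can be sketched in a few lines. The example below is a minimal illustration, not the paper's implementation: it assumes a 1D Poisson problem $\mathcal{L}u = -u''$ on $\Omega = (0,1)$, samples $u_j$ from a sine basis (so the zero Dirichlet boundary condition holds by construction), and recovers $f_j = \mathcal{L}u_j$ with JAX automatic differentiation instead of a numerical solve. The function names (`sample_solution`, `source_term`) and the basis choice are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def sample_solution(key, n_modes=8):
    # Hypothetical sampler: draw random coefficients for a sine basis.
    # Each basis function sin(pi*k*x) vanishes at x=0 and x=1, so the
    # sampled u automatically satisfies the zero Dirichlet condition.
    coeffs = jax.random.normal(key, (n_modes,)) / jnp.arange(1.0, n_modes + 1.0)
    def u(x):
        k = jnp.arange(1, n_modes + 1)
        return jnp.sum(coeffs * jnp.sin(jnp.pi * k * x))
    return u

def source_term(u):
    # f = L u = -u'', computed by nesting autodiff (no PDE solve,
    # no discretization error beyond floating point).
    u_xx = jax.grad(jax.grad(u))
    return lambda x: -u_xx(x)

key = jax.random.PRNGKey(0)
u = sample_solution(key)
f = source_term(u)

# The exact pair (f_j, u_j) can be evaluated on any grid at any resolution.
xs = jnp.linspace(0.0, 1.0, 101)
u_vals = jax.vmap(u)(xs)
f_vals = jax.vmap(f)(xs)
```

Each call to `sample_solution` with a fresh key yields a new exact $(f_j, u_j)$ pair, so a large training set costs only derivative evaluations, not repeated solver runs.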