Operator Learning at Machine Precision

📅 2025-11-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
Neural operator learning often hits an accuracy plateau, failing to surpass kernel methods and traditional reduced-order models. To address this, we propose CHONKNORIS, a method that couples Newton–Kantorovich iteration with operator learning: instead of regressing the solution operator directly, it regresses the Cholesky factors of the elliptic operator arising in Tikhonov-regularized Newton–Kantorovich updates. Unrolling this iteration yields a neural architecture whose contractive updates drive both forward and inverse nonlinear PDE solutions to machine precision. We further extend it to FONKNORIS, a foundation-model variant that aggregates multiple pre-trained CHONKNORIS experts and generalizes to unseen PDEs such as the Klein–Gordon and Sine–Gordon equations. Evaluated on six challenging benchmarks (a nonlinear elliptic equation, Burgers' equation, nonlinear Darcy flow, the Calderón problem, inverse wave scattering, and seismic imaging), CHONKNORIS reaches machine precision across all tasks.
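
To make the summary concrete, here is a minimal sketch, not the authors' implementation, of a Tikhonov-regularized Newton–Kantorovich loop in which the Cholesky factor of the regularized normal operator is supplied by a learned surrogate. The callables F, jac, and learned_cholesky and the stopping rule are illustrative assumptions (the regularization weight is taken to be baked into the predicted factor); the paper's exact update may differ.

```python
import numpy as np

def tikhonov_newton_kantorovich(F, jac, learned_cholesky, u0, n_iters=30, tol=1e-14):
    # Minimal sketch (not the paper's code): each step solves the Tikhonov-
    # regularized normal equations (J^T J + alpha*I) du = -J^T F(u), with the
    # lower-triangular Cholesky factor L of J^T J + alpha*I supplied by a
    # learned surrogate instead of an exact factorization.
    u = u0.copy()
    for _ in range(n_iters):
        r = F(u)                          # nonlinear PDE residual at u_k
        if np.linalg.norm(r) < tol:       # stop near the floating-point floor
            break
        J = jac(u)                        # discretized Frechet derivative at u_k
        L = learned_cholesky(u)           # hypothetical learned Cholesky factor
        y = np.linalg.solve(L, -J.T @ r)  # forward substitution
        du = np.linalg.solve(L.T, y)      # back substitution
        u = u + du                        # Newton-Kantorovich update
    return u
```

Under this reading, the learned factor only needs to be accurate enough for the update map to contract; the iteration itself then drives the remaining error toward machine precision.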

📝 Abstract
Neural operator learning methods have garnered significant attention in scientific computing for their ability to approximate infinite-dimensional operators. However, increasing their complexity often fails to substantially improve their accuracy, leaving them on par with much simpler approaches such as kernel methods and more traditional reduced-order models. In this article, we set out to address this shortcoming and introduce CHONKNORIS (Cholesky Newton–Kantorovich Neural Operator Residual Iterative System), an operator learning paradigm that can achieve machine precision. CHONKNORIS draws on numerical analysis: many nonlinear forward and inverse PDE problems are solvable by Newton-type methods. Rather than regressing the solution operator itself, our method regresses the Cholesky factors of the elliptic operator associated with Tikhonov-regularized Newton–Kantorovich updates. The resulting unrolled iteration yields a neural architecture whose machine-precision behavior follows from achieving a contractive map, requiring far lower accuracy than end-to-end approximation of the solution operator. We benchmark CHONKNORIS on a range of nonlinear forward and inverse problems, including a nonlinear elliptic equation, Burgers' equation, a nonlinear Darcy flow problem, the Calderón problem, an inverse wave scattering problem, and a problem from seismic imaging. We also present theoretical guarantees for the convergence of CHONKNORIS in terms of the accuracy of the emulated Cholesky factors. Additionally, we introduce a foundation model variant, FONKNORIS (Foundation Newton–Kantorovich Neural Operator Residual Iterative System), which aggregates multiple pre-trained CHONKNORIS experts for diverse PDEs to emulate the solution map of a novel nonlinear PDE. Our FONKNORIS model is able to accurately solve unseen nonlinear PDEs such as the Klein–Gordon and Sine–Gordon equations.
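
As a rough illustration of the contraction argument sketched in the abstract (the precise operators, norms, and constants are the paper's and are not reproduced here), one possible form of the regularized update with emulated Cholesky factors is the following; the Levenberg–Marquardt-style normal-equation regularization is an assumption.

```latex
% Assumed Levenberg--Marquardt-style regularization; the paper's exact scheme may differ.
\[
  L_k L_k^\top \;=\; F'(u_k)^* F'(u_k) + \alpha I,
  \qquad
  u_{k+1} \;=\; u_k \;-\; \bigl(\widetilde{L}_k \widetilde{L}_k^\top\bigr)^{-1} F'(u_k)^* F(u_k),
\]
% with \widetilde{L}_k \approx L_k the emulated (regressed) Cholesky factor.
% If the emulation is accurate enough that the update map is a contraction
% with constant q < 1, the error decays geometrically,
\[
  \|u_k - u^\star\| \;\le\; q^{\,k}\,\|u_0 - u^\star\|,
\]
% so roughly \log(1/\varepsilon)/\log(1/q) iterations reach tolerance \varepsilon,
% down to the floating-point floor.
```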
Problem

Research questions and friction points this paper is trying to address.

Achieving machine precision in neural operator learning for scientific computing
Improving accuracy beyond simple kernel methods and reduced-order models
Solving nonlinear forward and inverse PDE problems with Newton-type methods (a classical example is sketched below)
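
The third point can be made concrete with a classical, fully resolved Newton–Kantorovich solve (no learning involved) of a manufactured 1D nonlinear elliptic problem, -u'' + u^3 = f with homogeneous Dirichlet data; the discretization and exact solution below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Classical Newton-Kantorovich on a manufactured 1D problem: -u'' + u^3 = f,
# u(0) = u(1) = 0, discretized with second-order finite differences.
n = 99
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2        # discrete -d^2/dx^2
u_exact = np.sin(np.pi * x)                        # chosen discrete solution
f = A @ u_exact + u_exact**3                       # manufactured right-hand side

u = np.zeros(n)                                    # initial guess
for k in range(20):
    F = A @ u + u**3 - f                           # nonlinear residual
    if np.linalg.norm(F) < 1e-9:                   # tiny relative to ||f||
        break
    J = A + np.diag(3.0 * u**2)                    # Jacobian of the residual
    u -= np.linalg.solve(J, F)                     # Newton-Kantorovich step

print(k, np.max(np.abs(u - u_exact)))              # a handful of steps, error close to machine precision
```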
Innovation

Methods, ideas, or system contributions that make the work stand out.

Regresses Cholesky factors of the elliptic operator in Tikhonov-regularized Newton–Kantorovich updates
Unrolls the resulting iteration into a neural architecture whose updates form a contractive map
Reaches machine precision because contraction requires far lower accuracy from the network than end-to-end regression of the solution operator (see the depth estimate below)
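
A back-of-the-envelope consequence of the contraction view, using made-up constants (the paper's actual contraction factor and unroll depth are not quoted here):

```python
import math

# Illustrative only: if each unrolled layer contracts the error by a factor q,
# pushing an initial error e0 below a target eps needs q**K * e0 <= eps,
# i.e. K >= log(eps / e0) / log(q).  All three numbers below are assumptions.
q, e0, eps = 0.5, 1.0, 1e-15
K = math.ceil(math.log(eps / e0) / math.log(q))
print(K)   # 50 unrolled layers under these assumed constants
```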
Authors

Aras Bacho
Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA

Aleksei G. Sorokin
Department of Applied Mathematics, Illinois Institute of Technology, Chicago, IL, USA

Xianjin Yang
California Institute of Technology
Partial Differential Equations, Mean Field Games, Optimization, Gaussian Processes

Théo Bourdais
California Institute of Technology
Machine Learning, Gaussian Processes

Edoardo Calvello
Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA

Matthieu Darcy
Department of Computing and Mathematical Sciences, California Institute of Technology, Pasadena, CA, USA

Alexander Hsu
Department of Applied Mathematics, University of Washington, Seattle, WA, USA

Bamdad Hosseini
University of Washington
Inverse Problems, Applied Mathematics, Scientific Computing

Houman Owhadi
IBM Professor of Applied and Computational Mathematics and Control and Dynamical Systems, Caltech
SciML, Kernel/GP Methods, UQ, Stochastic/Multiscale/Geometric Integration/Analysis