Neural-POD: A Plug-and-Play Neural Operator Framework for Infinite-Dimensional Functional Nonlinear Proper Orthogonal Decomposition

📅 2026-02-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes Neural-POD, a framework that addresses a key limitation of many AI-for-Science approaches: their dependence on fixed grids or resolutions, which prevents generalization to new parameters or discretizations. By constructing nonlinear orthogonal bases in infinite-dimensional function spaces via neural networks, Neural-POD reformulates basis construction as a sequence of residual minimization problems, analogous to a nonlinear, learnable Gram–Schmidt process that incrementally captures the structure of the data. The method removes the linearity constraint of classical Proper Orthogonal Decomposition (POD), enabling optimization under arbitrary norms, resolution-invariant mappings between function spaces, and effective nonlinear feature extraction, and it is designed for seamless integration into reduced-order modeling and operator learning pipelines. Numerical experiments on complex spatiotemporal systems, including the Burgers and Navier–Stokes equations, demonstrate its robustness and its ability to bridge classical model reduction and modern operator learning.
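To make the recursion concrete, here is one plausible formalization of the residual-minimization scheme described above; the notation (snapshots $u_i$, residuals $r_i^{(k)}$, coefficients $c_i$, a norm $\|\cdot\|_X$ such as $L^2$ or $L^1$) is illustrative and not taken from the paper:

```latex
% Assumed greedy recursion (requires \usepackage{amsmath}); a sketch, not the
% paper's exact scheme. The k-th neural basis phi_k (a network with
% parameters theta) minimizes the norm of a rank-1 deflation of the residuals:
\[
  \phi_k \;=\; \operatorname*{arg\,min}_{\phi_\theta,\,\{c_i\}}
  \sum_{i=1}^{N} \bigl\| r_i^{(k-1)} - c_i\,\phi_\theta \bigr\|_X ,
  \qquad r_i^{(0)} = u_i ,
\]
% with orthogonality to phi_1, ..., phi_{k-1} enforced during training.
% The residual is then deflated before the next basis is learned:
\[
  r_i^{(k)} \;=\; r_i^{(k-1)} - c_i^{\star}\,\phi_k .
\]
```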

📝 Abstract
The rapid development of AI for Science is often hindered by discretization dependence, where learned representations remain restricted to the specific grids or resolutions used during training. We propose Neural Proper Orthogonal Decomposition (Neural-POD), a plug-and-play neural operator framework that constructs nonlinear, orthogonal basis functions in infinite-dimensional function spaces using neural networks. Unlike classical Proper Orthogonal Decomposition (POD), which is limited to linear subspace approximations obtained through singular value decomposition (SVD), Neural-POD formulates basis construction as a sequence of residual minimization problems solved through neural network training. Each basis function is obtained by learning to represent the remaining structure in the data, following a process analogous to Gram–Schmidt orthogonalization. This neural formulation introduces several key advantages over classical POD: it enables optimization in arbitrary norms (e.g., $L^2$, $L^1$), learns resolution-invariant mappings between infinite-dimensional function spaces, generalizes effectively to unseen parameter regimes, and inherently captures nonlinear structures in complex spatiotemporal systems. The resulting basis functions are interpretable, reusable, and readily integrated into both reduced-order modeling (ROM) and operator learning frameworks such as the deep operator network (DeepONet). We demonstrate the robustness of Neural-POD on several complex spatiotemporal systems, including the Burgers and Navier–Stokes equations. We further show that Neural-POD serves as a high-performance, plug-and-play bridge between classical Galerkin projection and operator learning, enabling consistent integration with both projection-based reduced-order models and DeepONet frameworks.
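As a minimal sketch of how such a greedy, nonlinear Gram–Schmidt loop could be implemented, the following PyTorch fragment fits one mesh-free basis network at a time to the current residual and then deflates it. Every name here (`BasisNet`, `fit_basis`, `neural_pod`) and every design choice (the soft orthogonality penalty, the per-snapshot rank-1 coefficients, the `p`-norm loss) is an illustrative assumption, not the paper's implementation:

```python
# Hypothetical sketch of a greedy neural-basis loop; names and design choices
# are assumptions for illustration, not the paper's API.
import torch
import torch.nn as nn

class BasisNet(nn.Module):
    """A basis function phi: coordinates x -> scalar, evaluable on any grid."""
    def __init__(self, dim=1, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )

    def forward(self, x):                     # x: (n_points, dim)
        return self.net(x).squeeze(-1)        # -> (n_points,)

def fit_basis(residuals, coords, prev_bases, steps=2000, lr=1e-3, p=2):
    """Fit one neural basis to the current residual snapshots.

    residuals: (n_snapshots, n_points) current residual data
    coords:    (n_points, dim) points of the training grid
    p:         norm for the loss (p=2 -> L2-like, p=1 -> L1-like), echoing
               the abstract's "optimization in arbitrary norms"
    """
    phi = BasisNet(dim=coords.shape[1])
    # One learned coefficient per snapshot (rank-1, Gram-Schmidt style update).
    coeffs = nn.Parameter(torch.zeros(residuals.shape[0]))
    opt = torch.optim.Adam(list(phi.parameters()) + [coeffs], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        vals = phi(coords)
        vals = vals / (vals.norm() + 1e-8)          # keep the basis normalized
        recon = coeffs[:, None] * vals[None, :]     # rank-1 approximation
        loss = (residuals - recon).abs().pow(p).mean()
        # Soft orthogonality penalty against previously learned bases.
        for psi in prev_bases:
            with torch.no_grad():
                psi_vals = psi(coords)
                psi_vals = psi_vals / (psi_vals.norm() + 1e-8)
            loss = loss + (vals @ psi_vals).pow(2)
        loss.backward()
        opt.step()
    return phi

def neural_pod(snapshots, coords, n_bases=4):
    """Greedy outer loop: train bases one at a time, deflating the residual."""
    residuals, bases = snapshots.clone(), []
    for _ in range(n_bases):
        phi = fit_basis(residuals, coords, bases)
        with torch.no_grad():
            vals = phi(coords)
            vals = vals / (vals.norm() + 1e-8)
            coeffs = residuals @ vals               # projection coefficients
            residuals = residuals - coeffs[:, None] * vals[None, :]
        bases.append(phi)
    return bases

# Example usage on synthetic 1D snapshots over a 128-point grid:
# x = torch.linspace(0, 1, 128)[:, None]
# snaps = torch.sin(torch.arange(1, 9)[:, None] * torch.pi * x.T)
# bases = neural_pod(snaps, x, n_bases=3)
```

Because each basis is a network over spatial coordinates rather than a vector tied to a fixed grid, the learned functions can be evaluated on any discretization, which is the resolution invariance the abstract emphasizes.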
Problem

Research questions and friction points this paper is trying to address.

discretization
Proper Orthogonal Decomposition
nonlinear structures
resolution-invariance
infinite-dimensional function spaces
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neural Operator
Nonlinear Proper Orthogonal Decomposition
Infinite-Dimensional Function Space
Resolution-Invariant Learning
Reduced Order Modeling