๐ค AI Summary
Time-dependent partial differential equations (PDEs) incur prohibitive computational costs in multi-query scenarios (e.g., real-time forecasting, optimal control, uncertainty quantification); existing reduced-order models (ROMs) suffer from poor cross-mesh generalizability, while neural operators lack rigorous discrete error quantification. Method: We propose a novel paradigm integrating ROMsโ rigorous error analysis with neural operatorsโ infinite-dimensional mapping capability. We establish the first theoretical bound on discretization error, design a vector-to-vector network architecture, and introduce a function-space-parameterized error control mechanism enabling unified modeling of spatial super-resolution, temporal extrapolation, and discrete robustness. Contribution/Results: Experiments demonstrate that, while preserving input generalizability, our method significantly outperforms state-of-the-art neural operators in super-resolution accuracy and mesh-transfer robustness, and achieves superior temporal extrapolation performance.
๐ Abstract
Time-dependent partial differential equations are ubiquitous in physics-based modeling, but they remain computationally intensive in many-query scenarios, such as real-time forecasting, optimal control, and uncertainty quantification. Reduced-order modeling (ROM) addresses these challenges by constructing a low-dimensional surrogate model but relies on a fixed discretization, which limits flexibility across varying meshes during evaluation. Operator learning approaches, such as neural operators, offer an alternative by parameterizing mappings between infinite-dimensional function spaces, enabling adaptation to data across different resolutions. Whereas ROM provides rigorous numerical error estimates, neural operator learning largely focuses on discretization convergence and invariance without quantifying the error between the infinite-dimensional and the discretized operators. This work introduces the reduced-order neural operator modeling (RONOM) framework, which bridges concepts from ROM and operator learning. We establish a discretization error bound analogous to those in ROM, and get insights into RONOM's discretization convergence and discretization robustness. Moreover, two numerical examples are presented that compare RONOM to existing neural operators for solving partial differential equations. The results demonstrate that RONOM using standard vector-to-vector neural networks achieves comparable performance in input generalization and superior performance in both spatial super-resolution and discretization robustness, while also offering novel insights into temporal super-resolution scenarios.