An adjoint method for training data-driven reduced-order models

📅 2026-01-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes a framework that integrates continuous-time operator inference with the adjoint-state method to address the poor accuracy and unstable extrapolation of traditional data-driven reduced-order models under sparse sampling and noisy data. By minimizing a trajectory-based loss during training, the approach avoids direct differentiation of noisy measurements and leverages temporal integration for intrinsic regularization. The adjoint method is incorporated into continuous-time operator inference for the first time, enabling efficient gradient computation and stable optimization. Combining continuous adjoint equations, projected snapshot matching, and gradient-based optimization, the method demonstrates significantly improved accuracy and roll-out stability over standard operator inference when tested on the viscous Burgers', Fisher–KPP, and advection–diffusion equations under sparse or noisy data conditions.
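In generic notation (ours, not necessarily the paper's), the ingredients named above follow the standard adjoint-state template; here $V$ denotes the reduced basis, $u(t)$ the full-order snapshots, and $f_\theta$ the parametrized reduced dynamics:

```latex
% Trajectory loss over the reduced solution \hat{x}(t;\theta):
J(\theta) = \frac{1}{2}\int_0^T \bigl\| \hat{x}(t;\theta) - V^\top u(t) \bigr\|^2 \, dt,
\qquad \dot{\hat{x}} = f_\theta(\hat{x}), \quad \hat{x}(0) = V^\top u(0).

% Continuous adjoint equation, integrated backward from t = T:
-\dot{\lambda} = \Bigl(\tfrac{\partial f_\theta}{\partial \hat{x}}\Bigr)^{\!\top} \lambda
  + \bigl(\hat{x} - V^\top u\bigr), \qquad \lambda(T) = 0.

% Gradient assembled from one forward and one adjoint solve:
\nabla_\theta J = \int_0^T \Bigl(\tfrac{\partial f_\theta}{\partial \theta}\Bigr)^{\!\top} \lambda \, dt.
```

Because no term in the loss involves time derivatives of $u$, noisy snapshots enter only through the integrated mismatch, which is the source of the temporal regularization claimed above.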

📝 Abstract
Reduced-order modeling lies at the interface of numerical analysis and data-driven scientific computing, providing principled ways to compress high-fidelity simulations in science and engineering. We propose a training framework that couples a continuous-time form of operator inference with the adjoint-state method to obtain robust data-driven reduced-order models. This method minimizes a trajectory-based loss between reduced-order solutions and projected snapshot data, which removes the need to estimate time derivatives from noisy measurements and provides intrinsic temporal regularization through time integration. We derive the corresponding continuous adjoint equations to compute gradients efficiently and implement a gradient-based optimizer to update the reduced model parameters. Each iteration only requires one forward reduced-order solve and one adjoint solve, followed by inexpensive gradient assembly, making the method attractive for large-scale simulations. We validate the proposed method on three partial differential equations: the viscous Burgers' equation, the two-dimensional Fisher–KPP equation, and an advection–diffusion equation. We perform systematic comparisons against standard operator inference under two perturbation regimes, namely reduced temporal snapshot density and additive Gaussian noise. For clean data, both approaches deliver similar accuracy, but in situations with sparse sampling and noise, the proposed adjoint-based training provides better accuracy and enhanced roll-out stability.
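The per-iteration structure described in the abstract — one forward reduced-order solve, one backward adjoint solve, then cheap gradient assembly — can be sketched on a toy problem. The following is our own illustration, not the paper's implementation: a linear reduced model dx/dt = A x trained against synthetic snapshots with a discrete (forward-Euler) adjoint; the system, step sizes, and learning rate are all assumptions.

```python
import numpy as np

def forward(A, x0, dt, N):
    """Forward-Euler solve of dx/dt = A x (the 'forward reduced-order solve')."""
    xs = [x0]
    for _ in range(N):
        xs.append(xs[-1] + dt * A @ xs[-1])
    return np.array(xs)

def loss_and_grad(A, data, dt):
    """Trajectory loss against snapshots, plus its exact discrete-adjoint gradient."""
    N = len(data) - 1
    xs = forward(A, data[0], dt, N)
    resid = xs - data
    J = 0.5 * dt * np.sum(resid[1:] ** 2)
    # Backward (adjoint) sweep: a single reverse pass accumulates dJ/dA.
    grad = np.zeros_like(A)
    g = dt * resid[N]                      # adjoint state at the final time
    for k in range(N - 1, -1, -1):
        grad += dt * np.outer(g, xs[k])    # contribution of the step k -> k+1
        g = (np.eye(len(A)) + dt * A).T @ g
        if k >= 1:
            g += dt * resid[k]             # data-mismatch forcing term
    return J, grad

# Synthetic snapshots from a 'true' system, then gradient-descent training.
rng = np.random.default_rng(0)
A_true = np.array([[-1.0, 2.0], [-2.0, -1.0]])
dt, N = 0.01, 200
data = forward(A_true, np.array([1.0, 0.0]), dt, N)

A = np.zeros((2, 2))                       # initial model guess
J0, _ = loss_and_grad(A, data, dt)
for _ in range(500):
    J, G = loss_and_grad(A, data, dt)
    A -= 0.05 * G                          # plain gradient-descent update
```

Note that each iteration costs exactly one forward sweep and one backward sweep over the trajectory, regardless of how many parameters A has; this is the efficiency argument the abstract makes for the adjoint approach.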
Problem

Research questions and friction points this paper is trying to address.

reduced-order modeling
data-driven
noise robustness
trajectory-based loss
operator inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

adjoint method
operator inference
reduced-order modeling
trajectory-based loss
data-driven scientific computing
Donglin Liu
Centre for Mathematical Sciences, Lund University, Sweden
Francisco García Atienza
Centre for Mathematical Sciences, Lund University, Sweden
Mengwu Guo
Associate Professor, Centre for Mathematical Sciences, Lund University
scientific computing, model reduction, uncertainty quantification, scientific machine learning