Streaming Operator Inference for Model Reduction of Large-Scale Dynamical Systems

📅 2026-01-17
🤖 AI Summary
This work addresses the scalability and adaptability limitations of traditional Operator Inference methods, which require full data loading and lack support for online updates—challenges that hinder their application to large-scale systems under memory constraints. To overcome these issues, we introduce, for the first time, a streaming learning framework into Operator Inference, proposing a non-intrusive model order reduction approach that leverages incremental singular value decomposition (Incremental SVD) and recursive least squares (RLS). This method enables the online construction and adaptive updating of low-dimensional dynamical models directly from continuous data streams. Experimental results demonstrate that the proposed technique achieves accuracy comparable to batch-based methods while reducing memory consumption by over 99%, achieving a dimensionality compression ratio exceeding 31,000×, and accelerating predictions by several orders of magnitude.
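The incremental SVD building block mentioned above can be illustrated with a Brand-style rank-one update, in which each arriving snapshot enlarges a small core matrix whose SVD refreshes the basis. The sketch below is a generic textbook variant (all names and sizes illustrative), not the paper's implementation:

```python
import numpy as np

def isvd_update(U, s, x, tol=1e-10):
    """Fold one new snapshot x into a thin SVD (U, s) of the data seen so far.
    The right singular factor is not tracked (it is not needed to build a
    projection basis). Generic Brand-style rank-one update; in practice the
    factors would also be truncated to a fixed rank to bound memory."""
    p = U.T @ x                       # coefficients of x in the current basis
    res = x - U @ p                   # residual orthogonal to span(U)
    rho = np.linalg.norm(res)
    r = s.size
    # Small (r+1) x (r+1) core matrix whose SVD refreshes the factorization.
    K = np.zeros((r + 1, r + 1))
    K[:r, :r] = np.diag(s)
    K[:r, r] = p
    K[r, r] = rho
    Uk, sk, _ = np.linalg.svd(K)
    if rho < tol:                     # snapshot already in span(U): rank fixed
        return U @ Uk[:r, :r], sk[:r]
    j = res / rho                     # new orthonormal direction
    return np.hstack([U, j[:, None]]) @ Uk, sk

# Stream columns one at a time; the result matches a batch SVD of the data.
rng = np.random.default_rng(1)
X = rng.standard_normal((30, 8))
s0 = np.linalg.norm(X[:, 0])
U, s = X[:, :1] / s0, np.array([s0])
for col in range(1, X.shape[1]):
    U, s = isvd_update(U, s, X[:, col])
```

Because only the small core matrix is decomposed at each step, the cost per snapshot is independent of how many snapshots have already been processed, which is what makes the streaming setting feasible.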

📝 Abstract
Projection-based model reduction enables efficient simulation of complex dynamical systems by constructing low-dimensional surrogate models from high-dimensional data. The Operator Inference (OpInf) approach learns such reduced surrogate models through a two-step process: constructing a low-dimensional basis via Singular Value Decomposition (SVD) to compress the data, then solving a linear least-squares (LS) problem to infer reduced operators that govern the dynamics in this compressed space, all without access to the underlying code or full model operators, i.e., non-intrusively. Traditional OpInf operates as a batch learning method, where both the SVD and LS steps process all data simultaneously. This poses a barrier to deployment of the approach on large-scale applications where dataset sizes prevent the loading of all data into memory at once. Additionally, the traditional batch approach does not naturally allow model updates using new data acquired during online computation. To address these limitations, we propose Streaming OpInf, which learns reduced models from sequentially arriving data streams. Our approach employs incremental SVD for adaptive basis construction and recursive LS for streaming operator updates, eliminating the need to store complete data sets while enabling online model adaptation. The approach can flexibly combine different choices of streaming algorithms for numerical linear algebra: we systematically explore the impact of these choices both analytically and numerically to identify effective combinations for accurate reduced model learning. Numerical experiments on benchmark problems and a large-scale turbulent channel flow demonstrate that Streaming OpInf achieves accuracy comparable to batch OpInf while reducing memory requirements by over 99% and enabling dimension reductions exceeding 31,000x, resulting in orders-of-magnitude faster predictions.
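The two-step batch procedure described in the abstract can be sketched in a few lines of NumPy: compress synthetic snapshot data with a POD basis from the SVD, then fit a reduced linear operator by least squares. All problem sizes, data, and names here are illustrative stand-ins, not the paper's setup:

```python
import numpy as np

# Synthetic snapshot data from a stable linear system x' = A x
# (a stand-in for high-dimensional simulation data).
rng = np.random.default_rng(0)
n, k, r = 50, 200, 5                # full dim, snapshots, reduced dim
A = -np.diag(rng.uniform(0.5, 2.0, n))
dt = 0.01
X = np.empty((n, k))
X[:, 0] = rng.standard_normal(n)
for j in range(1, k):               # forward-Euler trajectory
    X[:, j] = X[:, j - 1] + dt * A @ X[:, j - 1]

# Step 1: POD basis from the SVD of the snapshot matrix (batch: all of X
# must be in memory at once -- the bottleneck Streaming OpInf removes).
U, svals, _ = np.linalg.svd(X, full_matrices=False)
V = U[:, :r]                        # rank-r basis

# Step 2: infer the reduced operator non-intrusively by least squares.
Xhat = V.T @ X                             # compressed states
Xdot_hat = np.gradient(Xhat, dt, axis=1)   # time-derivative estimates
Ahat, *_ = np.linalg.lstsq(Xhat.T, Xdot_hat.T, rcond=None)
Ahat = Ahat.T                              # reduced model: xhat' = Ahat @ xhat
```

The reduced operator `Ahat` is learned from data alone, with no access to `A` itself, which is the sense in which OpInf is non-intrusive.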
Problem

Research questions and friction points this paper is trying to address.

Operator Inference
model reduction
streaming data
large-scale dynamical systems
non-intrusive
Innovation

Methods, ideas, or system contributions that make the work stand out.

Streaming Operator Inference
incremental SVD
recursive least-squares
non-intrusive model reduction
online model adaptation
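The recursive least-squares ingredient listed above admits a compact sketch: in covariance form, each new regressor/output pair updates the operator estimate and an inverse-Gram matrix without revisiting past data. This is the generic textbook update (illustrative names; a forgetting factor, which a streaming setting would typically add, is omitted):

```python
import numpy as np

def rls_update(theta, P, phi, y):
    """Covariance-form recursive least squares: update the parameter matrix
    theta (mapping regressor phi to output y) with one new (phi, y) pair.
    Each row of theta is a scalar RLS problem sharing the same P."""
    Pphi = P @ phi
    g = Pphi / (1.0 + phi @ Pphi)          # gain vector
    theta = theta + np.outer(y - theta @ phi, g)
    P = P - np.outer(g, Pphi)              # rank-one downdate of (Gram)^-1
    return theta, P

# Recover a known linear operator from streamed (state, derivative) pairs.
rng = np.random.default_rng(2)
r = 4
A_true = rng.standard_normal((r, r))
theta = np.zeros((r, r))
P = 1e6 * np.eye(r)                        # large P: uninformative prior
for _ in range(200):
    phi = rng.standard_normal(r)           # streamed reduced state
    y = A_true @ phi                       # corresponding time derivative
    theta, P = rls_update(theta, P, phi, y)
```

Each update costs O(r^2) regardless of how much data has streamed by, so the operator estimate can be refined online as new data arrives.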
Tomoki Koike
School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
Prakash Mohan
Computational Science Center, National Laboratory of the Rockies (NLR), 15013 Denver West Parkway, Golden, CO 80401, USA
M. H. D. Frahan
Computational Science Center, National Laboratory of the Rockies (NLR), 15013 Denver West Parkway, Golden, CO 80401, USA
Julie Bessac
National Renewable Energy Laboratory
statistical modeling, machine learning, uncertainty quantification
Elizabeth Qian
Georgia Institute of Technology
scientific machine learning, model reduction, multi-fidelity methods, uncertainty quantification