Data-assimilated model-informed reinforcement learning

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Real-time control of high-dimensional spatiotemporal chaotic systems remains challenging under partial, noisy observations and imperfect models. Method: This paper proposes a hybrid physics-informed and data-driven control framework. It integrates sequential data assimilation—specifically the ensemble Kalman filter (EnKF)—with model-informed off-policy Actor-Critic reinforcement learning, and introduces a control-aware echo state network (CA-ESN) to enable physics-guided state estimation and closed-loop control. A low-order coarse-grained surrogate model is further incorporated to enhance interpretability and computational efficiency. Contribution/Results: Evaluated on chaotic solutions of the Kuramoto–Sivashinsky equation, the framework achieves high-accuracy, real-time full-state estimation and effective chaos suppression using only sparse, noisy measurements and an imprecise system model. It establishes a scalable, robust, and adaptive control paradigm for partially observable chaotic systems, bridging physical modeling with deep learning–based decision-making under uncertainty.

📝 Abstract
The control of spatio-temporal chaos is challenging because of high dimensionality and unpredictability. Model-free reinforcement learning (RL) discovers optimal control policies by interacting with the system, typically requiring observations of the full physical state. In practice, sensors often provide only partial and noisy measurements (observations) of the system. The objective of this paper is to develop a framework that enables the control of chaotic systems with partial and noisy observability. The proposed method, data-assimilated model-informed reinforcement learning (DA-MIRL), integrates (i) low-order models to approximate high-dimensional dynamics; (ii) sequential data assimilation to correct the model prediction when observations become available; and (iii) an off-policy actor-critic RL algorithm to adaptively learn an optimal control strategy based on the corrected state estimates. We test DA-MIRL on the spatiotemporally chaotic solutions of the Kuramoto-Sivashinsky equation. We estimate the full state of the environment with (i) a physics-based model, here, a coarse-grained model; and (ii) a data-driven model, here, the control-aware echo state network, which is proposed in this paper. We show that DA-MIRL successfully estimates and suppresses the chaotic dynamics of the environment in real time from partial observations and approximate models. This work opens opportunities for the control of partially observable chaotic systems.
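The sequential data-assimilation step (ii) can be illustrated with a stochastic ensemble Kalman filter analysis update. The following is a minimal NumPy sketch, not the paper's implementation; the ensemble size, state dimension, linear observation operator, and noise level are all illustrative assumptions:

```python
import numpy as np

def enkf_analysis(ensemble, obs, obs_operator, obs_noise_std, rng):
    """Stochastic EnKF analysis step (illustrative sketch).

    ensemble: (m, n) array of m forecast members with state dimension n
    obs: (d,) noisy observation vector
    obs_operator: (d, n) linear observation matrix H
    """
    m, n = ensemble.shape
    H = obs_operator
    # Ensemble anomalies (deviations from the ensemble mean)
    X = ensemble - ensemble.mean(axis=0)
    Y = X @ H.T                        # (m, d) anomalies in observation space
    # Sample forecast covariances
    Pf_Ht = X.T @ Y / (m - 1)          # (n, d) cross-covariance P_f H^T
    S = Y.T @ Y / (m - 1) + obs_noise_std**2 * np.eye(len(obs))
    K = Pf_Ht @ np.linalg.inv(S)       # Kalman gain, (n, d)
    # Perturb the observation for each member (stochastic EnKF)
    perturbed = obs + obs_noise_std * rng.standard_normal((m, len(obs)))
    innovations = perturbed - ensemble @ H.T
    return ensemble + innovations @ K.T

# Usage: 10 members, 8-dimensional state, 3 sparse point measurements
rng = np.random.default_rng(0)
ens = rng.standard_normal((10, 8))
H = np.zeros((3, 8))
H[0, 0] = H[1, 3] = H[2, 6] = 1.0
analysis = enkf_analysis(ens, np.array([0.5, -0.2, 0.1]), H, 0.05, rng)
print(analysis.shape)  # (10, 8)
```

In DA-MIRL the analysis ensemble, rather than the raw sparse observations, is what the actor-critic agent conditions its control actions on.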
Problem

Research questions and friction points this paper is trying to address.

Control chaotic systems with partial noisy observations
Integrate low-order models and data assimilation for state estimation
Develop adaptive RL for optimal control in high-dimensional chaos
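For the RL component, an off-policy actor-critic agent learns from assimilated state estimates. The sketch below shows one temporal-difference critic update on a single transition; the linear actor/critic parameterization, reward, and learning rate are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(2)
n_state, n_act = 8, 2

theta = rng.standard_normal((n_act, n_state)) * 0.1   # actor weights (illustrative)
w = rng.standard_normal(n_state + n_act) * 0.1        # linear critic weights

def actor(s):
    """Deterministic policy: bounded action from the estimated state."""
    return np.tanh(theta @ s)

def critic(s, a):
    """Linear state-action value estimate Q(s, a) = w . [s, a]."""
    return w @ np.concatenate([s, a])

def td_update(s, a, r, s_next, gamma=0.99, lr=1e-2):
    """One off-policy TD(0) semi-gradient step on the critic."""
    global w
    target = r + gamma * critic(s_next, actor(s_next))
    delta = target - critic(s, a)             # TD error
    w += lr * delta * np.concatenate([s, a])  # move Q(s, a) toward the target
    return delta

# Usage: reward penalizes state energy, as in chaos suppression
s = rng.standard_normal(n_state)
a = actor(s)
delta = td_update(s, a, r=-float(s @ s), s_next=0.9 * s)
```

Being off-policy, updates like this can reuse transitions from a replay buffer gathered under earlier policies, which is what makes learning on streaming, assimilated estimates practical.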
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates low-order models for high-dimensional dynamics
Uses data assimilation to correct model predictions
Applies actor-critic RL for optimal control strategy
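The data-driven surrogate is a control-aware echo state network. In the spirit of that idea, the sketch below adds a control-input channel to a standard leaky reservoir update; the reservoir size, scaling, leak rate, and weight initialization are illustrative assumptions (a trained linear readout, omitted here, would map the reservoir state back to the full physical state):

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, n_obs, n_act = 200, 8, 2

# Fixed random weights: observations, control actions, and reservoir recurrence
W_in = rng.uniform(-0.5, 0.5, (n_res, n_obs))
W_act = rng.uniform(-0.5, 0.5, (n_res, n_act))   # control channel (assumption)
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def esn_step(r, obs, action, leak=0.7):
    """Leaky reservoir update driven by observations and the control action."""
    pre = W @ r + W_in @ obs + W_act @ action
    return (1 - leak) * r + leak * np.tanh(pre)

# Usage: wash in the reservoir with a few observation steps, zero actuation
r = np.zeros(n_res)
for _ in range(5):
    r = esn_step(r, rng.standard_normal(n_obs), np.zeros(n_act))
print(r.shape)  # (200,)
```

Conditioning the reservoir on the action is what makes the surrogate usable inside a closed control loop: the model's forecast depends on what the agent does, not only on what it observes.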
D. E. Ozan
Department of Aeronautics, Imperial College London, London SW7 2AZ, UK.
Andrea Nóvoa
Department of Aeronautics, Imperial College London, London SW7 2AZ, UK.
Georgios Rigas
Imperial College London
Dynamical Systems · Fluid Dynamics · Turbulence · Aerodynamics · Control
Luca Magri
Department of Aeronautics, Imperial College London, London SW7 2AZ, UK., The Alan Turing Institute, London NW1 2DB, UK., DIMEAS, Politecnico di Torino, Torino 10129, Italy.