🤖 AI Summary
This work addresses the inefficiency of reinforcement learning (RL) for controlling fluid systems, which stems from the high cost of direct numerical simulations (DNS) and from the susceptibility of surrogate models to policy-induced distribution shift. The authors propose a Linear Recurrent Autoencoder Network (LRAN), grounded in Koopman operator theory, as a surrogate model for the two-dimensional Rayleigh-Bénard convection system. They further introduce a policy-aware training scheme in which the surrogate is retrained on trajectories collected from the evolving policy, mitigating distribution shift and substantially improving prediction accuracy in the regions of state space the policy actually visits. By using the surrogate to pretrain the agent before continuing training on DNS, the method maintains state-of-the-art control performance while reducing RL training time by more than 40%.
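The core LRAN idea is that a nonlinear system can be encoded into a latent space where the dynamics evolve (approximately) linearly, per Koopman operator theory: encode once, advance with a fixed linear operator, decode each step. The following is a minimal sketch of that rollout structure, assuming toy linear encoder/decoder maps and a hand-picked stable latent operator in place of the jointly learned networks from the paper; all dimensions and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: flattened 2D flow state and Koopman latent size.
STATE_DIM, LATENT_DIM = 64, 8

# Toy linear stand-ins for the learned encoder/decoder networks, plus a
# linear latent operator K (the finite-dimensional Koopman approximation).
# In the actual LRAN all three are trained jointly on simulation data.
W_enc = rng.standard_normal((LATENT_DIM, STATE_DIM)) * 0.1
W_dec = rng.standard_normal((STATE_DIM, LATENT_DIM)) * 0.1
K = 0.95 * np.eye(LATENT_DIM)  # stable linear latent dynamics

def rollout(x0: np.ndarray, steps: int) -> np.ndarray:
    """Predict a trajectory entirely in latent space:
    encode once, advance linearly, decode every step."""
    z = W_enc @ x0
    preds = []
    for _ in range(steps):
        z = K @ z              # linear recurrence: z_{t+1} = K z_t
        preds.append(W_dec @ z)
    return np.stack(preds)

traj = rollout(rng.standard_normal(STATE_DIM), steps=10)
print(traj.shape)  # (10, 64)
```

Because the recurrence is linear, multi-step prediction is cheap and its stability is governed by the spectrum of `K`, which is what makes such surrogates attractive as fast RL training environments.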
📝 Abstract
Training reinforcement learning (RL) agents to control fluid dynamics systems is computationally expensive due to the high cost of direct numerical simulations (DNS) of the governing equations. Surrogate models offer a promising alternative by approximating the dynamics at a fraction of the computational cost, but their feasibility as training environments for RL is limited by distribution shifts, as policies induce state distributions not covered by the surrogate training data. In this work, we investigate the use of Linear Recurrent Autoencoder Networks (LRANs) for accelerating RL-based control of 2D Rayleigh-Bénard convection. We evaluate two training strategies: a surrogate trained on precomputed data generated with random actions, and a policy-aware surrogate trained iteratively using data collected from an evolving policy. Our results show that while surrogate-only training leads to reduced control performance, combining surrogates with DNS in a pretraining scheme recovers state-of-the-art performance while reducing training time by more than 40%. We demonstrate that policy-aware training mitigates the effects of distribution shift, enabling more accurate predictions in policy-relevant regions of the state space.
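The policy-aware strategy described above amounts to interleaving on-policy data collection in the expensive simulator with surrogate refitting, so the surrogate stays accurate on the states the current policy induces. Below is a hedged sketch of that loop; the DNS step, the policy, and the least-squares "surrogate fit" are toy stand-ins invented for illustration, not the paper's actual components.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins: in the paper these are the DNS environment,
# the RL policy, and the LRAN surrogate; here they are toy callables.
def dns_step(state, action):      # expensive ground-truth simulator (toy)
    return 0.9 * state + 0.1 * action

def policy(state):                # evolving RL policy (toy: proportional control)
    return -state

dataset = []                      # (state, action, next_state) for surrogate fitting
state = rng.standard_normal(4)

for iteration in range(3):        # outer policy-aware loop
    # 1) Collect a short on-policy trajectory with the expensive simulator,
    #    so the surrogate sees the states the current policy actually visits.
    for _ in range(5):
        action = policy(state)
        nxt = dns_step(state, action)
        dataset.append((state, action, nxt))
        state = nxt
    # 2) Refit the surrogate on the aggregated on-policy data
    #    (a linear least-squares model standing in for LRAN training).
    X = np.array([np.concatenate([s, a]) for s, a, _ in dataset])
    Y = np.array([n for _, _, n in dataset])
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)
    # 3) The RL agent would now train cheaply inside the refitted surrogate
    #    before the next round of DNS data collection.

print(len(dataset), A.shape)  # 15 (8, 4)
```

Aggregating rather than replacing the dataset each round is one common way to keep coverage of earlier state distributions while tracking the policy's drift.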