Transfer learning strategies for accelerating reinforcement-learning-based flow control

📅 2025-10-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
In chaotic fluid flow control with multi-fidelity simulations, policy transfer from low- to high-fidelity environments suffers from catastrophic forgetting and unstable adaptation. To address this, we propose a Progressive Neural Network (PNN)-based transfer learning framework—the first application of PNNs to flow control. Our method integrates layer-wise sensitivity analysis with fine-grained, fidelity-aware fine-tuning to enable structured knowledge transfer and retention across fidelity levels. Experiments on the Kuramoto–Sivashinsky system demonstrate that, compared to conventional fine-tuning, our approach significantly improves transfer stability and convergence speed, enhances robustness against overfitting, and maintains consistent performance across diverse physical scenarios. The core contribution is a novel, interpretable, and scalable multi-fidelity transfer paradigm specifically designed for chaotic flow control—bridging fidelity gaps while preserving learned dynamics and enabling principled knowledge reuse.
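The PNN mechanism summarized above, a frozen source-policy column feeding lateral connections into a trainable target column, can be sketched in a few lines of NumPy. The layer sizes, initialization, and two-column structure below are illustrative assumptions for a minimal policy network, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class Column:
    """One PNN column: a small fully connected policy network."""
    def __init__(self, sizes):
        self.W = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

    def forward(self, x):
        acts = [x]
        for W in self.W:
            x = relu(W @ x)
            acts.append(x)
        return acts  # activations at every layer, exposed for lateral reuse

class ProgressiveNet:
    """Target column with a frozen source column and lateral adapters."""
    def __init__(self, sizes):
        self.source = Column(sizes)  # pretrained on the low-fidelity task; never updated
        self.target = Column(sizes)  # trained on the high-fidelity task
        # Lateral adapters U_i map source-layer (i-1) activations into target layer i
        self.U = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]

    def forward(self, x):
        src_acts = self.source.forward(x)  # frozen forward pass: knowledge is preserved
        h = x
        for i, (W, U) in enumerate(zip(self.target.W, self.U)):
            h = relu(W @ h + U @ src_acts[i])  # lateral connection from the source column
        return h

obs_dim, hidden, act_dim = 8, 16, 4  # hypothetical dimensions
pnn = ProgressiveNet([obs_dim, hidden, act_dim])
action = pnn.forward(rng.normal(size=obs_dim))
```

Because only `self.target.W` and `self.U` would receive gradient updates, the source policy cannot be overwritten, which is what makes this architecture immune to catastrophic forgetting by construction.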

📝 Abstract
This work investigates transfer learning strategies to accelerate deep reinforcement learning (DRL) for multi-fidelity control of chaotic fluid flows. Progressive neural networks (PNNs), a modular architecture designed to preserve and reuse knowledge across tasks, are employed for the first time in the context of DRL-based flow control. In addition, a comprehensive benchmark of conventional fine-tuning strategies is conducted, evaluating their performance, convergence behavior, and ability to retain transferred knowledge. The Kuramoto–Sivashinsky (KS) system is used as a benchmark to examine how knowledge encoded in control policies trained in low-fidelity environments can be effectively transferred to high-fidelity settings. Systematic evaluations show that while fine-tuning can accelerate convergence, it is highly sensitive to pretraining duration and prone to catastrophic forgetting. In contrast, PNNs enable stable and efficient transfer by preserving prior knowledge and providing consistent performance gains, and are notably robust to overfitting during the pretraining phase. Layer-wise sensitivity analysis further reveals how PNNs dynamically reuse intermediate representations from the source policy while progressively adapting deeper layers to the target task. Moreover, PNNs remain effective even when the source and target environments differ substantially, such as in cases with mismatched physical regimes or control objectives, where fine-tuning strategies often yield suboptimal adaptation or complete failure of knowledge transfer. The results highlight the potential of such transfer learning frameworks for robust, scalable, and computationally efficient flow control, with possible extension to more complex flow configurations.
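The KS benchmark named in the abstract is the one-dimensional PDE u_t = -u u_x - u_xx - u_xxxx on a periodic domain, a standard low-cost surrogate for chaotic flow. A minimal semi-implicit Fourier spectral solver is sketched below; the grid size, domain length, time step, and initial condition are illustrative choices, not the paper's setup:

```python
import numpy as np

# Illustrative discretization of the KS equation u_t = -u u_x - u_xx - u_xxxx
N, L_dom = 128, 22.0                          # grid points, periodic domain length
x = L_dom * np.arange(N) / N
k = 2 * np.pi * np.fft.rfftfreq(N, d=L_dom / N)  # wavenumbers
lin = k**2 - k**4                             # linear operator in Fourier space

u = np.cos(2 * np.pi * x / L_dom) * (1 + np.sin(2 * np.pi * x / L_dom))
v = np.fft.rfft(u)

dt = 0.005
for _ in range(4000):
    # Nonlinear term -u u_x written in conservative form as -(u^2/2)_x
    nonlin = -0.5j * k * np.fft.rfft(np.fft.irfft(v, n=N) ** 2)
    # Stiff linear part treated implicitly, nonlinear part explicitly
    v = (v + dt * nonlin) / (1.0 - dt * lin)

u = np.fft.irfft(v, n=N)                      # chaotic state at t = 20
```

Changing the resolution `N` (or the time step) is one natural way to realize the low- versus high-fidelity environments between which the policies are transferred.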
Problem

Research questions and friction points this paper is trying to address.

Accelerating deep reinforcement learning for chaotic fluid flow control
Evaluating transfer learning strategies to prevent catastrophic forgetting
Enabling robust knowledge transfer across different fidelity flow environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive neural networks enable stable knowledge transfer
Comprehensive benchmarking of fine-tuning strategies conducted
PNNs dynamically reuse representations across different environments