🤖 AI Summary
Problem: Wind farm power output is significantly reduced by wake effects, and existing control methods lack robustness under time-varying wind conditions.
Method: This paper proposes a deep reinforcement learning (DRL)-based flow-field cooperative control strategy for dynamic wind conditions that jointly optimizes the yaw angles of individual turbines to mitigate wake losses. The authors introduce a novel DRL architecture integrating Graph Attention Networks (GAT) with multi-head self-attention, coupled with a physics-informed reward function and a progressive training scheme, to enhance generalization across arbitrary time-varying wind speeds and directions.
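The paper's exact architecture is not reproduced here, but a minimal sketch of the idea, assuming PyTorch and PyTorch Geometric, might look as follows: each turbine is a graph node whose embedding is refined first by graph attention over wake-coupled neighbors, then by farm-wide multi-head self-attention, before a per-turbine head emits a yaw command. All layer sizes, feature choices, and names (`YawPolicy`, the edge construction, etc.) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv  # graph attention layer (PyG)

class YawPolicy(nn.Module):
    """Illustrative GAT + multi-head self-attention yaw policy.

    Each node is a turbine; node features might be local wind speed,
    wind direction, and current yaw. Edges connect turbines whose
    wakes can interact. All names and sizes here are assumptions.
    """

    def __init__(self, in_dim: int = 3, hidden: int = 64,
                 gat_heads: int = 4, attn_heads: int = 4):
        super().__init__()
        # Graph attention: each turbine aggregates its wake-coupled neighbors.
        self.gat = GATConv(in_dim, hidden, heads=gat_heads, concat=False)
        # Multi-head self-attention over all turbines for farm-wide context.
        self.attn = nn.MultiheadAttention(hidden, attn_heads, batch_first=True)
        # Per-turbine head mapping embeddings to a normalized yaw command in
        # [-1, 1], to be rescaled to the allowed misalignment range.
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Tanh(),
        )

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.gat(x, edge_index))      # (n_turbines, hidden)
        h = h.unsqueeze(0)                           # add batch dimension
        h, _ = self.attn(h, h, h)                    # farm-wide self-attention
        return self.head(h.squeeze(0)).squeeze(-1)   # one yaw command per turbine

# Example: a 4-turbine farm with bidirectional wake-coupling edges.
x = torch.randn(4, 3)                                # [speed, direction, yaw]
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
yaw_commands = YawPolicy()(x, edge_index)            # shape: (4,)
```

One appeal of this design is that neither the GAT layer nor the self-attention block fixes the number of turbines, which is consistent with the paper's claim of scalability to large farms.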
Contribution/Results: Experiments on a steady-state, low-fidelity simulation demonstrate that the proposed method requires roughly 10x fewer training steps than a fully connected baseline (a ~90% reduction), increases energy production by up to 14% under dynamic wind conditions, and is substantially more robust than a strong optimization baseline. This work establishes a scalable, highly adaptive paradigm for real-time intelligent control of large-scale wind farms.
📝 Abstract
Within wind farms, wake effects between turbines can significantly reduce overall energy production. Wind farm flow control encompasses methods designed to mitigate these effects through coordinated turbine control. Wake steering, for example, consists of intentionally misaligning certain turbines with the wind to optimize airflow and increase power output. However, designing a robust wake steering controller remains challenging, and existing machine learning approaches are limited to quasi-static wind conditions or small wind farms. This work presents a new deep reinforcement learning methodology to develop a wake steering policy that overcomes these limitations. Our approach introduces a novel architecture that combines graph attention networks and multi-head self-attention blocks, alongside a novel reward function and training strategy. The resulting model computes the yaw angle of each turbine, optimizing energy production under time-varying wind conditions. An empirical study conducted in a steady-state, low-fidelity simulation shows that our model requires approximately 10 times fewer training steps than a fully connected neural network and achieves more robust performance than a strong optimization baseline, increasing energy production by up to 14%. To the best of our knowledge, this is the first deep reinforcement learning-based wake steering controller to generalize effectively across arbitrary time-varying wind conditions in a low-fidelity, steady-state numerical simulation setting.
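The abstract does not specify the reward's exact form. One plausible, hedged reading of a "physics-informed" wake steering reward normalizes farm power by a greedy (zero-misalignment) baseline under the same wind and penalizes large yaw excursions between control steps; the sketch below is an illustration of that reading, and `penalty`, the normalization scheme, and the function name are all assumptions.

```python
import numpy as np

def physics_informed_reward(farm_power: float, greedy_power: float,
                            yaw_prev: np.ndarray, yaw_now: np.ndarray,
                            penalty: float = 0.01) -> float:
    """Hypothetical wake steering reward, NOT the paper's exact formulation.

    Normalizing farm power by a greedy (zero-misalignment) baseline under
    the same wind keeps the signal comparable across wind speeds; the
    actuation term discourages large yaw changes between control steps.
    """
    power_gain = farm_power / greedy_power           # > 1 when steering helps
    actuation = float(np.abs(yaw_now - yaw_prev).sum())
    return power_gain - penalty * actuation
```

In a setup like the one the abstract describes, `farm_power` and `greedy_power` would come from a low-fidelity steady-state wake model (NREL's FLORIS is one commonly used example, though the paper's simulator is not named here).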