🤖 AI Summary
This work addresses the training instability and poor convergence commonly encountered when applying reinforcement learning (RL) to industrial continuous control tasks. To overcome these limitations, the authors propose a hybrid approach that integrates the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) with RL. Specifically, CMA-ES generates high-quality demonstration trajectories, which are used both to warm-start the RL agent's initialization and to construct a strong oracle that serves as a performance benchmark. Experiments across multiple industrial continuous-control simulation tasks show that the proposed method substantially improves both the training stability and the control performance of RL agents, offering an effective paradigm for synergistically combining evolutionary algorithms with reinforcement learning.
📝 Abstract
Reinforcement learning (RL) is still rarely applied in industrial control, partly due to the difficulty of training reliable agents for real-world conditions. This work investigates how evolution strategies can support RL in such settings by introducing a continuous-control adaptation of an industrial sorting benchmark. The CMA-ES algorithm is used to generate high-quality demonstrations that warm-start RL agents. Results show that CMA-ES-guided initialization significantly improves stability and performance. Furthermore, the demonstration trajectories generated with CMA-ES provide a strong oracle reference performance level, which is of interest in its own right. The study delivers a focused proof of concept for hybrid evolutionary-RL approaches and a basis for future, more complex industrial applications.
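The warm-start pipeline described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the toy integrator plant stands in for the industrial sorting benchmark, and a simplified evolution-strategy loop (weighted recombination with step-size decay, without full covariance adaptation) stands in for CMA-ES. The evolved "oracle" policy produces a demonstration trajectory, which is behavior-cloned via least squares into an initial policy that an RL agent could then refine.

```python
import numpy as np

# Toy 1-D setpoint-tracking task standing in for the industrial benchmark
# (the plant, policy class, and all names are illustrative assumptions).
def rollout(theta, steps=30):
    """Run a linear policy a = theta[0]*x + theta[1] on a simple integrator."""
    x, cost, traj = 1.0, 0.0, []
    for _ in range(steps):
        a = theta[0] * x + theta[1]
        traj.append((x, a))             # record (state, action) demonstration pair
        x = x + 0.1 * a                 # integrator dynamics
        cost += x * x + 0.01 * a * a    # quadratic tracking + effort cost
    return cost, traj

# Simplified ES loop (rank-based recombination only) standing in for CMA-ES.
rng = np.random.default_rng(0)
mean, sigma = np.zeros(2), 0.5
for _ in range(40):
    pop = mean + sigma * rng.standard_normal((16, 2))
    costs = np.array([rollout(p)[0] for p in pop])
    mean = pop[np.argsort(costs)[:4]].mean(axis=0)  # recombine best candidates
    sigma *= 0.95                                   # crude step-size decay

best_cost, demo = rollout(mean)  # demonstration trajectory from the "oracle"

# Warm-start: behavior-clone the demonstration into a fresh linear policy
# by least squares on (state, action) pairs; an RL agent would start here.
X = np.array([[s, 1.0] for s, _ in demo])
y = np.array([a for _, a in demo])
theta_init, *_ = np.linalg.lstsq(X, y, rcond=None)
warm_cost, _ = rollout(theta_init)
```

Because the demonstration actions are an exact linear function of the state here, the cloned policy matches the oracle; in the paper's setting the clone would instead serve as a strong initialization that RL training then improves on, with the oracle cost as a reference level.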