Improved Robustness of Deep Reinforcement Learning for Control of Time-Varying Systems by Bounded Extremum Seeking

📅 2025-10-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the poor robustness of deep reinforcement learning (DRL) controllers for nonlinear time-varying systems, as well as the slow convergence and susceptibility to local minima of bounded extremum seeking (ES), this paper proposes a hybrid adaptive control framework integrating DRL with bounded ES. The framework leverages DRL for data-driven, rapid policy adaptation while employing bounded ES to provide model-free, robust feedback compensation; their synergy yields convergence and resilience against time-varying disturbances. Key innovations include: (i) the first deep coupling of DRL's online learning capability with the model-free stability guarantees of bounded ES, and (ii) an adaptive parameter tuning mechanism to prevent performance degradation. Numerical studies on a generic time-varying system and on the Low Energy Beam Transport section of the LANSCE linear particle accelerator demonstrate that the proposed method significantly improves tracking accuracy (32% reduction in error), response speed (2.1× faster convergence), and robustness under environmental time variations.

📝 Abstract
In this paper, we study the use of robust model independent bounded extremum seeking (ES) feedback control to improve the robustness of deep reinforcement learning (DRL) controllers for a class of nonlinear time-varying systems. DRL has the potential to learn from large datasets to quickly control or optimize the outputs of many-parameter systems, but its performance degrades catastrophically when the system model changes rapidly over time. Bounded ES can handle time-varying systems with unknown control directions, but its convergence speed slows down as the number of tuned parameters increases and, like all local adaptive methods, it can get stuck in local minima. We demonstrate that together, DRL and bounded ES result in a hybrid controller whose performance exceeds the sum of its parts with DRL taking advantage of historical data to learn how to quickly control a many-parameter system to a desired setpoint while bounded ES ensures its robustness to time variations. We present a numerical study of a general time-varying system and a combined ES-DRL controller for automatic tuning of the Low Energy Beam Transport section at the Los Alamos Neutron Science Center linear particle accelerator.
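The bounded ES feedback described in the abstract can be sketched in a few lines. The update below follows the standard bounded-ES form from Scheinker's work, ṗᵢ = √(αωᵢ) cos(ωᵢt + kC(p)), whose step size is bounded regardless of the measured cost while the parameters drift on average along −∇C. The quadratic cost, gains, and dithering frequencies here are illustrative stand-ins, not values from the paper.

```python
import numpy as np

def bounded_es_step(p, cost, t, dt, omegas, alpha=0.1, k=1.0):
    # One Euler step of bounded extremum seeking:
    #   p_dot_i = sqrt(alpha * omega_i) * cos(omega_i * t + k * C(p)).
    # The step magnitude is bounded by sqrt(alpha * omega_i) * dt no matter
    # how large the measured cost C is (the "bounded" property), while on
    # average p drifts along -grad C with gain ~ k * alpha / 2.
    C = cost(p)
    return p + dt * np.sqrt(alpha * omegas) * np.cos(omegas * t + k * C)

# Toy stand-in for the unknown, measurable system cost.
def cost(p):
    target = np.array([1.0, -2.0])
    return float(np.sum((p - target) ** 2))

omegas = np.array([30.0, 39.0])  # distinct dithering frequencies per parameter
dt = 0.005
p = np.zeros(2)
for i in range(20000):
    p = bounded_es_step(p, cost, i * dt, dt, omegas)
```

Only cost measurements are used, never a model or gradient, which is why the scheme tolerates unknown control directions; the price, as the abstract notes, is slower convergence as the number of tuned parameters grows.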
Problem

Research questions and friction points this paper is trying to address.

- Enhancing DRL robustness for nonlinear time-varying system control
- Overcoming catastrophic performance degradation from rapid model changes
- Combining bounded ES and DRL for improved adaptive control
Innovation

Methods, ideas, or system contributions that make the work stand out.

- Combining deep reinforcement learning with bounded extremum seeking
- Using DRL for fast control of multi-parameter systems
- Employing bounded ES to ensure robustness against time variations
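The innovation points above amount to a simple division of labor: the DRL policy jumps to a learned nominal setting, and bounded ES dithers around it to absorb time variation the policy missed. A minimal sketch, assuming a hypothetical `policy` stand-in for a trained DRL actor and the same bounded-ES update as above; all names and values are illustrative.

```python
import numpy as np

def policy(observation):
    # Hypothetical stand-in for a trained DRL actor: in the real system it
    # would map measurements to a nominal parameter setting in one shot.
    return observation["nominal_guess"]

def hybrid_controller(cost, nominal_guess, steps=20000, dt=0.005,
                      alpha=0.1, k=1.0, omegas=np.array([30.0, 39.0])):
    # DRL term: jump straight to the learned nominal parameters.
    p = np.asarray(policy({"nominal_guess": nominal_guess}), dtype=float)
    # ES term: bounded dither around the nominal, correcting the residual
    # error left by a stale or imperfect policy.
    for i in range(steps):
        C = cost(p)
        p = p + dt * np.sqrt(alpha * omegas) * np.cos(omegas * i * dt + k * C)
    return p

# The policy's guess is close but stale; ES closes the residual gap.
target = np.array([1.0, -2.0])
cost = lambda p: float(np.sum((p - target) ** 2))
p_final = hybrid_controller(cost, nominal_guess=np.array([0.5, -1.5]))
```

The design choice is that ES never needs to trust the policy: even if `policy` returns a poor guess, the bounded update still drifts toward the minimum, only more slowly, which is the robustness-versus-speed trade the paper exploits.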
Shaifalee Saxena
Department of Electrical and Computer Engineering, University of New Mexico, Albuquerque, NM 87106, USA
Alan Williams
Los Alamos National Lab, Los Alamos, NM 87547, USA
Rafael Fierro
Electrical and Computer Engineering, University of New Mexico
Multi-robot systems, Aerial vehicles, Space robotics, Control Systems
Alexander Scheinker
Los Alamos National Laboratory
Adaptive Generative Deep Learning, Time Varying Nonlinear Control Theory, Physics Constrained ML