System Identification and Control Using Lyapunov-Based Deep Neural Networks without Persistent Excitation: A Concurrent Learning Approach

📅 2025-05-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Deep neural networks (DNNs) struggle to simultaneously achieve high-precision trajectory tracking and online system identification in adaptive control, particularly under non-persistent excitation (non-PE) conditions where parameter convergence cannot be guaranteed. Method: This paper proposes a model-free concurrent learning adaptive framework that eliminates the PE requirement. It introduces the first Lyapunov-stable, real-time weight update law for all layers of a fully connected DNN, driven by observation error, ensuring uniform ultimate boundedness of parameter estimation errors, tracking errors, and observation errors within a neighborhood of the origin. Contribution/Results: The approach requires no prior dynamical knowledge. Extensive simulations across multiple systems demonstrate a 40.5%–73.6% improvement in function approximation accuracy, a 58.88%–74.75% enhancement in extrapolation generalization capability, and tracking performance and energy consumption comparable to baseline methods.

📝 Abstract
Deep Neural Networks (DNNs) are increasingly used in control applications due to their powerful function approximation capabilities. However, many existing formulations focus primarily on tracking error convergence, often neglecting the challenge of identifying the system dynamics using the DNN. This paper presents the first result on simultaneous trajectory tracking and online system identification using a DNN-based controller, without requiring persistent excitation. Two new concurrent learning adaptation laws are constructed for the weights of all the layers of the DNN, achieving convergence of the DNN's parameter estimates to a neighborhood of their ideal values, provided the DNN's Jacobian satisfies a finite-time excitation condition. A Lyapunov-based stability analysis is conducted to ensure convergence of the tracking error, weight estimation errors, and observer errors to a neighborhood of the origin. Simulations performed on a range of systems and trajectories, with the same initial and operating conditions, demonstrated 40.5% to 73.6% improvement in function approximation performance compared to the baseline, while maintaining a similar tracking error and control effort. Simulations evaluating function approximation capabilities on data points outside of the trajectory resulted in 58.88% and 74.75% improvement in function approximation compared to the baseline.
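The core idea behind concurrent learning, as described in the abstract, is to augment the instantaneous error-driven adaptation law with terms that replay recorded data, so a finite-time excitation condition replaces persistent excitation. The sketch below illustrates that general mechanism for a simple linear-in-parameters model `y = θᵀφ(x)`; it is a textbook-style illustration under assumed gains (`gamma`, `k_cl`), not the paper's multi-layer DNN adaptation law.

```python
import numpy as np

def cl_update(theta_hat, phi, err, stack, gamma=1.0, k_cl=1.0):
    """One concurrent-learning step for y = theta^T phi(x).

    Combines an instantaneous error-driven term with a history-stack
    term built from recorded (phi_j, y_j) pairs. Gains are illustrative.
    """
    # Instantaneous adaptation driven by the current prediction error
    dtheta = gamma * phi * err
    # Concurrent-learning term: replaying a rich recorded data stack,
    # rather than ongoing excitation, drives parameter convergence
    for phi_j, y_j in stack:
        dtheta += k_cl * gamma * phi_j * (y_j - phi_j @ theta_hat)
    return theta_hat + dtheta

# Identify f(x) = 2*x0 - x1. Record rich data once, then let the
# excitation effectively stop -- the stack still forces convergence.
rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])
stack = [(x, theta_true @ x) for x in rng.standard_normal((20, 2))]

theta_hat = np.zeros(2)
for _ in range(200):
    x = np.array([1.0, 0.0])  # constant input: not persistently exciting
    err = theta_true @ x - theta_hat @ x
    theta_hat = cl_update(theta_hat, x, err, stack, gamma=0.01, k_cl=1.0)

print(np.round(theta_hat, 2))  # estimate approaches theta_true
```

Without the stack term, the constant input would only ever correct the first component of the estimate; the recorded data supplies the missing directions, which is the excitation-relaxing mechanism the paper extends to all layers of a DNN.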
Problem

Research questions and friction points this paper is trying to address.

Simultaneous trajectory tracking and online system identification using DNNs
Achieving DNN parameter convergence without persistent excitation
Improving function approximation performance in control applications
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lyapunov-based DNN control without persistent excitation
Concurrent learning for DNN weight adaptation
Simultaneous trajectory tracking and system identification
Rebecca G. Hart
University of Florida
Nonlinear Control, Adaptive Control, Physics-Informed Learning, Control Systems
O. S. Patil
Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, FL 32611
Zachary I. Bell
Air Force Research Lab
nonlinear control, adaptive control, intermittent sensing, vision-based estimation, robotics
Warren E. Dixon
Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, FL 32611