🤖 AI Summary
Deep neural networks (DNNs) in adaptive control struggle to simultaneously achieve high-precision trajectory tracking and online system identification, particularly under non-persistent excitation (non-PE) conditions, where parameter convergence cannot be guaranteed. Method: This paper proposes a concurrent learning adaptive framework that replaces the PE requirement with a milder finite-time excitation condition on the DNN's Jacobian. It introduces the first Lyapunov-stable, real-time weight update laws for all layers of a fully connected DNN, driven by tracking and observation errors, ensuring uniform ultimate boundedness of the parameter estimation errors, tracking errors, and observation errors within a neighborhood of the origin. Contribution/Results: The approach requires no prior knowledge of the system dynamics. Extensive simulations across multiple systems and trajectories demonstrate a 40.5%–73.6% improvement in function approximation accuracy, improvements of 58.88% and 74.75% in extrapolation (off-trajectory) generalization, and tracking performance and control effort comparable to the baseline.
📝 Abstract
Deep Neural Networks (DNNs) are increasingly used in control applications due to their powerful function approximation capabilities. However, many existing formulations focus primarily on tracking error convergence, often neglecting the challenge of identifying the system dynamics using the DNN. This paper presents the first result on simultaneous trajectory tracking and online system identification using a DNN-based controller, without requiring persistent excitation. Two new concurrent learning adaptation laws are constructed for the weights of all the layers of the DNN, achieving convergence of the DNN's parameter estimates to a neighborhood of their ideal values, provided the DNN's Jacobian satisfies a finite-time excitation condition. A Lyapunov-based stability analysis ensures convergence of the tracking error, weight estimation errors, and observer errors to a neighborhood of the origin. Simulations performed on a range of systems and trajectories, with the same initial and operating conditions, demonstrated a 40.5% to 73.6% improvement in function approximation performance compared to the baseline, while maintaining similar tracking error and control effort. Simulations evaluating function approximation on data points outside of the trajectory yielded improvements of 58.88% and 74.75% over the baseline.
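To make the core idea concrete, here is a minimal, hypothetical sketch of a concurrent learning update for a single linear-in-parameters output layer, y = Wᵀφ(x): recorded data points are replayed alongside the instantaneous error, so parameter convergence relies on a finite stack of sufficiently exciting data rather than on persistent excitation. This is an illustration of the concurrent learning principle only, not the paper's multi-layer adaptation laws; all names (`phi`, `gamma`, `history`) and the novelty heuristic are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_phi, n_y = 2, 6, 1
A = rng.standard_normal((n_phi, n_x))       # fixed inner-layer weights
b = rng.standard_normal(n_phi)

def phi(x):
    """Penultimate-layer features (frozen here; the paper adapts all layers)."""
    return np.tanh(A @ x + b)

W_true = rng.standard_normal((n_phi, n_y))  # unknown ideal output weights
W = np.zeros((n_phi, n_y))                  # online estimate
gamma, dt = 1.0, 0.01                       # adaptation gain, step size
history = []                                # stored (phi_j, y_j) pairs

def cl_step(x, y_meas):
    """One update: instantaneous gradient plus replay of the history stack."""
    global W
    p = phi(x)
    dW = np.outer(p, y_meas - W.T @ p)      # current-data term
    for p_j, y_j in history:                # concurrent learning term
        dW += np.outer(p_j, y_j - W.T @ p_j)
    W += gamma * dt * dW
    # record a point only if it is sufficiently novel (adds excitation)
    if len(history) < 10 and all(np.linalg.norm(p - pj) > 0.3 for pj, _ in history):
        history.append((p, y_meas))

# Train on samples of the unknown map and measure approximation error.
xs_test = rng.standard_normal((50, n_x))
def approx_err():
    return float(np.mean([abs(W_true.T @ phi(x) - W.T @ phi(x)) for x in xs_test]))

err_before = approx_err()
for _ in range(2000):
    x = rng.standard_normal(n_x)
    cl_step(x, W_true.T @ phi(x))
err_after = approx_err()
```

Without the replay loop this reduces to a plain gradient update, whose parameter estimates converge only under persistent excitation; with it, the stored stack keeps driving the estimates toward the ideal weights even when the current input is uninformative.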