🤖 AI Summary
Achieving rapid, stable, and robust position–attitude cooperative tracking for quadrotors under arbitrary initial states remains challenging due to strong dynamic coupling, stringent transient/steady-state performance requirements, and sensitivity to initialization and disturbances.
Method: This paper proposes a three-stage curriculum learning framework that begins with fixed-hover stabilization and progressively advances to full-state tracking under random initial conditions, while explicitly enforcing both transient and steady-state performance constraints. A staged, incremental curriculum strategy is paired with an additive reward function that analytically embeds the performance metrics. Built on Proximal Policy Optimization (PPO), the approach integrates dynamic random initialization, performance-driven reward modeling, and perturbation-robustness validation.
Contribution/Results: Experiments demonstrate that, compared to baseline single-stage PPO, the method achieves 2.3× faster convergence and 60% lower computational overhead at equivalent tracking performance, while maintaining high-precision, robust control against diverse initial-state deviations and external disturbances.
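The summary does not spell out how an additive reward can embed both transient and steady-state specifications. A minimal sketch of one plausible composition is below; all term names, weights, and thresholds (`settle_time`, `steady_band`, the `w_*` gains) are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def additive_tracking_reward(pos_err, vel, ang_err, t,
                             settle_time=3.0, steady_band=0.05,
                             w_pos=1.0, w_vel=0.1, w_ang=0.2, w_steady=2.0):
    """Illustrative additive reward: each performance specification
    contributes its own term, so terms can be tuned independently.

    pos_err : np.ndarray, position error vector [m]
    vel     : np.ndarray, linear velocity [m/s]
    ang_err : float, attitude error magnitude [rad]
    t       : float, time elapsed since episode start [s]
    """
    e = np.linalg.norm(pos_err)
    # Transient term: dense shaping that penalizes position error,
    # speed, and attitude error at every step.
    r_transient = -w_pos * e - w_vel * np.linalg.norm(vel) - w_ang * abs(ang_err)
    # Steady-state term: bonus only once the error is inside the band
    # after the allotted settling time (a hard performance spec).
    r_steady = w_steady if (t >= settle_time and e <= steady_band) else 0.0
    return r_transient + r_steady
```

Because the terms add rather than multiply, a designer can tighten the steady-state band or re-weight the transient penalty without re-deriving the whole reward, which is one common motivation for additive structures.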
📝 Abstract
This article introduces a curriculum learning approach to develop a reinforcement learning-based robust stabilizing controller for a quadrotor that meets predefined performance criteria. The learning objective is to reach desired positions from random initial conditions while adhering to both transient and steady-state performance specifications. This objective is challenging for conventional one-stage end-to-end reinforcement learning due to the strong coupling between position and orientation dynamics, the complexity of designing and tuning the reward function, and poor sample efficiency, which demands substantial computational resources and leads to long convergence times. To address these challenges, this work decomposes the learning objective into a three-stage curriculum that incrementally increases task complexity. The curriculum begins with learning stable hovering from a fixed initial condition, followed by progressively introducing randomization in initial positions, orientations, and velocities. A novel additive reward function is proposed to incorporate the transient and steady-state performance specifications. The results demonstrate that the Proximal Policy Optimization (PPO)-based curriculum learning approach, coupled with the proposed reward structure, achieves superior performance compared to a single-stage PPO-trained policy with the same reward function, while significantly reducing computational resource requirements and convergence time. The curriculum-trained policy's performance and robustness are thoroughly validated under random initial conditions and in the presence of disturbances.
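The staged randomization described in the abstract (fixed hover start, then random positions and orientations, then random velocities) can be illustrated with an initial-state sampler whose ranges widen with the curriculum stage. The stage boundaries and numeric ranges below are hypothetical placeholders, not values from the paper:

```python
import numpy as np

def sample_initial_state(stage, rng=None):
    """Sample a quadrotor initial state whose randomization widens
    with the curriculum stage.

    Stage 1: fixed hover start (no randomization).
    Stage 2: random initial positions and orientations.
    Stage 3: additionally, random initial velocities.
    All ranges are illustrative placeholders.
    """
    if rng is None:
        rng = np.random.default_rng()
    state = {
        "position": np.zeros(3),   # [m], relative to the hover setpoint
        "euler": np.zeros(3),      # roll, pitch, yaw [rad]
        "velocity": np.zeros(3),   # [m/s]
    }
    if stage >= 2:
        state["position"] = rng.uniform(-1.0, 1.0, size=3)
        state["euler"] = rng.uniform(-0.3, 0.3, size=3)
    if stage >= 3:
        state["velocity"] = rng.uniform(-0.5, 0.5, size=3)
    return state
```

In a curriculum training loop, the PPO learner would be run to convergence at each stage before the sampler is advanced, so the policy trained on the easier distribution initializes learning on the harder one.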