Motion Control in Multi-Rotor Aerial Robots Using Deep Reinforcement Learning

📅 2025-02-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the insufficient robustness of real-time motion control for multirotor aerial robots in additive manufacturing (AM) under dynamic payload variations and external disturbances, this paper proposes a curriculum-learning-enhanced Twin Delayed Deep Deterministic Policy Gradient (TD3) deep reinforcement learning framework. The method systematically validates, for the first time in drone-based AM tasks, TD3’s superior training stability, trajectory tracking accuracy, and task success rate over Deep Deterministic Policy Gradient (DDPG). Integrating multirotor dynamics modeling, real-time closed-loop simulation, and physical experiments, the proposed policy achieves a 98.2% task success rate under ±40% payload variation, reduces trajectory tracking error by 37%, and significantly outperforms conventional PID and DDPG baselines. This work establishes a transferable, robust control paradigm for autonomous AM in dynamically loaded operational scenarios.
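The summary names TD3's advantages over DDPG without spelling out what distinguishes the two. As background, a minimal sketch of TD3's core mechanisms follows: twin critics with a clipped double-Q target and target-policy smoothing. This is an illustrative sketch of the standard TD3 algorithm, not the paper's implementation; all function names and toy values here are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def td3_target(r, q1_next, q2_next, gamma=0.99, done=False):
    """Clipped double-Q target: bootstrap from the MINIMUM of the twin
    critics' next-state estimates, which curbs the value overestimation
    that destabilizes DDPG."""
    q_min = min(q1_next, q2_next)
    return r + (0.0 if done else gamma * q_min)

def smoothed_target_action(mu, noise_std=0.2, noise_clip=0.5, a_max=1.0):
    """Target-policy smoothing: add clipped Gaussian noise to the target
    actor's action, then clip to the valid action range, so the critic is
    not fit to sharp spurious peaks in the Q-landscape."""
    eps = np.clip(rng.normal(0.0, noise_std), -noise_clip, noise_clip)
    return float(np.clip(mu + eps, -a_max, a_max))
```

TD3's third ingredient, delayed policy updates (updating the actor only every few critic updates), is a scheduling choice rather than a computation and is omitted here.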

📝 Abstract
This paper investigates the application of Deep Reinforcement Learning (DRL) to address motion control challenges in drones for additive manufacturing (AM). Drone-based additive manufacturing promises flexible and autonomous material deposition in large-scale or hazardous environments. However, achieving robust real-time control of a multi-rotor aerial robot under varying payloads and potential disturbances remains challenging. Traditional controllers like PID often require frequent parameter re-tuning, limiting their applicability in dynamic scenarios. We propose a DRL framework that learns adaptable control policies for multi-rotor drones performing waypoint navigation in AM tasks. We compare Deep Deterministic Policy Gradient (DDPG) and Twin Delayed Deep Deterministic Policy Gradient (TD3) within a curriculum learning scheme designed to handle increasing complexity. Our experiments show that TD3 consistently balances training stability, accuracy, and success, particularly when mass variability is introduced. These findings provide a scalable path toward robust, autonomous drone control in additive manufacturing.
Problem

Research questions and friction points this paper is trying to address.

Deep Reinforcement Learning for drone control
Adaptive motion control in dynamic environments
Robust real-time control for additive manufacturing drones
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep Reinforcement Learning for drones
Curriculum learning for complexity handling
TD3 for stable autonomous control
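The curriculum-learning idea above can be pictured as a staged schedule that widens the payload-variation range only once the policy masters the current stage. The ±40% end point follows the summary; the stage step, success threshold, and function names below are illustrative assumptions, not the authors' code.

```python
import random

def next_stage(payload_range, success_rate, step=0.10,
               max_range=0.40, threshold=0.9):
    """Advance to a harder stage (wider +/- payload range) once the
    current stage's success rate clears the threshold; cap at +/-40%."""
    if success_rate >= threshold:
        return min(payload_range + step, max_range)
    return payload_range

def sample_payload(nominal_mass, payload_range, rng=random):
    """Draw a payload mass uniformly within +/-payload_range of nominal,
    as a training episode in the current curriculum stage would."""
    return nominal_mass * (1.0 + rng.uniform(-payload_range, payload_range))
```

Training would alternate TD3 updates with periodic evaluation, calling `next_stage` on each evaluation to decide whether to ramp up the payload variability.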
Gaurav Shetty
Automation and Robotics Research Group, Interdisciplinary Centre for Security, Reliability, and Trust (SnT), University of Luxembourg, Luxembourg; Bonn-Rhein-Sieg University of Applied Sciences, Germany
Mahya Ramezani
Visiting Researcher, SnT, University of Luxembourg
UAV · Path Planning · Deep Reinforcement Learning · MPC · Machine Learning
Hamed Habibi
Research Fellow at the School of Engineering and Energy, Murdoch University
Robotics · Control Systems Design · Fault Detection · Observers Design · UAV
Holger Voos
University of Luxembourg, SnT Automation & Robotics Research Group
Control Engineering · Automation · Mobile Robotics
J. L. Sánchez-López
Automation and Robotics Research Group, Interdisciplinary Centre for Security, Reliability, and Trust (SnT), University of Luxembourg, Luxembourg