AI Summary
High-resolution terrain maps in UAV relay communications incur substantial onboard storage overhead and impede convergence of deep reinforcement learning (DRL) algorithms. Method: This paper proposes a lightweight path planning framework tailored for the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm. It integrates principal component analysis (PCA) for state-space dimensionality reduction, joint terrain-user localization modeling, composite sample generation, prioritized experience replay (PER), and an MSE-MAE hybrid loss function. Contribution/Results: The method significantly compresses state representation dimensions and experience buffer requirements while preserving path quality. Experiments demonstrate a ~75% reduction in training episodes required for convergence compared to standard TD3, alongside marked decreases in onboard memory and computational load. This enables efficient, real-time path planning for resource-constrained UAVs.
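The PCA-based state compression the summary describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the map size (32×32), sample count, and number of retained components are assumptions chosen only to show how a flattened terrain/coverage map can be projected into a much smaller state vector.

```python
import numpy as np

# Illustrative stand-in for a dataset of flattened 32x32 coverage/terrain maps.
rng = np.random.default_rng(0)
maps = rng.normal(size=(200, 32 * 32))   # 200 sample maps, 1024 features each

# Fit PCA via SVD on the mean-centered data.
mean = maps.mean(axis=0)
centered = maps - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 16                                   # assumed number of components to keep
components = vt[:k]                      # (k, 1024) projection basis

def compress(flat_map):
    """Project a flattened map onto the top-k principal components."""
    return (flat_map - mean) @ components.T

state = compress(maps[0])
print(state.shape)                       # 16-dim state instead of 1024-dim
```

The compressed 16-dimensional vector would then replace the raw map as (part of) the DRL state, shrinking both the network input and the per-transition footprint in the replay buffer.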
Abstract
Unmanned Aerial Vehicles (UAVs) are increasingly essential in fields such as surveillance, reconnaissance, and telecommunications. This study develops a learning algorithm for the path planning of UAV wireless communication relays that reduces storage requirements and accelerates Deep Reinforcement Learning (DRL) convergence. Assuming the system possesses terrain maps of the area and can estimate user locations via localization algorithms or direct GPS reporting, these parameters can be fed into the learning algorithm to optimize path planning performance. However, extracting topological information such as terrain height, object distances, and signal blockages requires high-resolution terrain maps, which increases memory and storage demands on UAVs and lengthens DRL convergence times. Likewise, defining the telecommunication coverage map for UAV wireless communication relays from these terrain maps and user position estimates further increases the memory and storage footprint of the learning-based path planning algorithm. Our approach reduces path planning training time by applying a dimensionality reduction technique based on Principal Component Analysis (PCA), sample combination, Prioritized Experience Replay (PER), and a combination of Mean Squared Error (MSE) and Mean Absolute Error (MAE) losses on the coverage map estimates, thereby enhancing the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm. The proposed solution reduces the episodes needed to converge in basic training by approximately a factor of four compared to the traditional TD3.
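The MSE-MAE combination mentioned in the abstract can be sketched as a weighted hybrid loss. This is an assumed formulation for illustration only: the weighting scheme (a single `alpha` blend) and its value are not taken from the paper, which may combine the two terms differently.

```python
import numpy as np

def hybrid_loss(pred, target, alpha=0.5):
    """Weighted blend of MSE and MAE on coverage-map estimates.

    alpha is an assumed hyperparameter: MSE penalizes large errors
    strongly, while MAE is more robust to outliers; blending the two
    trades off those properties.
    """
    err = pred - target
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    return alpha * mse + (1 - alpha) * mae

# Example: an error of 2 on a single estimate gives 0.5*4 + 0.5*2 = 3.0
print(hybrid_loss(np.array([2.0]), np.array([0.0])))  # 3.0
```

In a TD3-style setup, a loss of this shape would replace the plain MSE critic loss on the predicted coverage values.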