🤖 AI Summary
This work addresses the limitations of traditional reinforcement learning in trajectory planning for autonomous aerial vehicles (AAVs), where sparse scalar rewards lead to credit assignment difficulties and unstable training. The authors propose L4V, a framework that constructs an end-to-end differentiable computational graph jointly modeling AAV kinematics, channel gain, and data collection dynamics. By employing backpropagation through time (BPTT), L4V computes exact, analytically derived dense policy gradients, replacing high-variance reward signals. The approach combines a deterministic neural policy with temporal smoothness regularization and gradient clipping to learn long-horizon control stably despite nonlinear action effects. Experimental results demonstrate that L4V significantly outperforms baseline methods—including genetic algorithms, DQN, A2C, and DDPG—in mission completion time, average transmission rate, and training efficiency.
📝 Abstract
Autonomous aerial vehicles (AAVs) empower sixth-generation (6G) Internet-of-Things (IoT) networks through mobility-driven data collection. However, conventional reward-driven reinforcement learning for AAV trajectory planning suffers from severe credit assignment issues and training instability, because sparse scalar rewards fail to capture the long-term and nonlinear effects of sequential movements. To address these challenges, this paper proposes Learn for Variation (L4V), a gradient-informed trajectory learning framework that replaces high-variance scalar reward signals with dense and analytically grounded policy gradients. Specifically, the coupled evolution of AAV kinematics, distance-dependent channel gains, and per-user data-collection progress is first unrolled into an end-to-end differentiable computational graph. Backpropagation through time then serves as a discrete adjoint solver, which propagates exact sensitivities from the cumulative mission objective to every control action and policy parameter. These structured gradients are used to train a deterministic neural policy with temporal smoothness regularization and gradient clipping. Extensive simulations demonstrate that L4V consistently outperforms representative baselines, including a genetic algorithm, DQN, A2C, and DDPG, in mission completion time, average transmission rate, and training cost.
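To make the core idea concrete, here is a minimal sketch of BPTT as a discrete adjoint solver on a toy 1-D analogue: a vehicle moves toward a user under a proportional policy, collects rate through a distance-dependent channel gain, and the exact gradient of the cumulative objective with respect to the policy parameter is recovered by a reverse adjoint sweep over the unrolled graph. The dynamics, rate model, horizon, and parameter names are illustrative assumptions, not the authors' actual system model.

```python
import math

# Illustrative constants (assumed, not from the paper).
P = 10.0   # transmit SNR scale
U = 5.0    # user position
T = 20     # planning horizon

def rate(x):
    """Distance-dependent rate: log(1 + P / (1 + d^2)), d = x - U."""
    d = x - U
    return math.log(1.0 + P / (1.0 + d * d))

def drate_dx(x):
    """Analytic derivative of rate() with respect to position x."""
    d = x - U
    s = 1.0 + d * d
    g = P / s
    return -2.0 * P * d / ((1.0 + g) * s * s)

def rollout(theta, x0=0.0):
    """Unroll x_{t+1} = x_t + theta * (U - x_t); return states and objective J."""
    xs = [x0]
    for _ in range(T):
        xs.append(xs[-1] + theta * (U - xs[-1]))
    J = sum(rate(x) for x in xs[1:])
    return xs, J

def bptt_grad(theta, x0=0.0):
    """Reverse (adjoint) sweep: exact dJ/dtheta through the unrolled graph."""
    xs, _ = rollout(theta, x0)
    lam = 0.0    # adjoint dJ/dx_{t+1}, accumulated backward in time
    grad = 0.0
    for t in range(T - 1, -1, -1):
        lam += drate_dx(xs[t + 1])   # objective contribution at state x_{t+1}
        grad += lam * (U - xs[t])    # dx_{t+1}/dtheta = U - x_t
        lam *= (1.0 - theta)         # dx_{t+1}/dx_t = 1 - theta
    return grad

theta = 0.3
g = bptt_grad(theta)
g_clipped = max(-1.0, min(1.0, g))   # gradient clipping, as the paper uses
```

A dense gradient like `g` is available at every update, rather than a single sparse scalar reward at episode end; the paper's temporal smoothness regularizer would add a penalty on consecutive action differences to this objective before the same adjoint sweep.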