🤖 AI Summary
Addressing the challenges of jointly optimizing energy efficiency and communication coverage in UAV-enabled IoT data collection, as well as the instability of offline reinforcement learning (RL) training and its reliance on high-quality expert demonstrations, this paper proposes the LLM-CRDT framework. It integrates prior knowledge from a large language model (LLM) with a critic-regularized decision transformer (CRDT), enabling efficient few-shot transfer via Low-Rank Adaptation (LoRA). The framework jointly optimizes UAV trajectory planning and wireless resource allocation, combining the sequential decision-making capability of sequence models with value-function guidance for the policy. Methodologically, it unifies offline RL, decision transformers, linear programming, and parameter-efficient fine-tuning. Simulation results demonstrate a 36.7% improvement in energy efficiency over state-of-the-art baselines, alongside stable training and strong generalization, even with limited expert demonstration data.
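The LoRA adaptation mentioned above can be sketched in a few lines; this is a generic illustration of the low-rank update (all names and dimensions here are illustrative, not taken from the paper's implementation): the pre-trained LLM weight stays frozen, and only a low-rank correction is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Instead of fine-tuning the full frozen weight W, LoRA learns a low-rank
# correction B @ A with rank r << min(d_out, d_in).
d_out, d_in, r = 8, 6, 2
W_frozen = rng.normal(size=(d_out, d_in))   # pre-trained LLM weight (frozen)
A = rng.normal(size=(r, d_in)) * 0.01       # trainable low-rank factor
B = np.zeros((d_out, r))                    # zero-init: no change at step 0
alpha = 4.0                                 # LoRA scaling hyperparameter

# Effective weight used in the forward pass.
W_eff = W_frozen + (alpha / r) * B @ A

# Trainable parameters shrink from d_out*d_in to r*(d_in + d_out).
full_params = d_out * d_in
lora_params = r * (d_in + d_out)
```

Because `B` is zero-initialized, the adapted model exactly matches the pre-trained one before fine-tuning begins, which is one reason LoRA adapts quickly on small-scale datasets.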
📝 Abstract
The deployment of unmanned aerial vehicles (UAVs) for reliable and energy-efficient data collection from spatially distributed devices holds great promise in supporting diverse Internet of Things (IoT) applications. Nevertheless, the limited endurance and communication range of UAVs necessitate intelligent trajectory planning. While reinforcement learning (RL) has been extensively explored for UAV trajectory optimization, its interactive nature entails high costs and risks in real-world environments. Offline RL mitigates these issues but remains susceptible to unstable training and relies heavily on expert-quality datasets. To address these challenges, we formulate a joint UAV trajectory planning and resource allocation problem to maximize the energy efficiency of data collection. The resource allocation subproblem is first transformed into an equivalent linear programming formulation and solved optimally with polynomial-time complexity. Then, we propose a large language model (LLM)-empowered critic-regularized decision transformer (DT) framework, termed LLM-CRDT, to learn effective UAV control policies. In LLM-CRDT, we incorporate critic networks to regularize the DT model training, thereby integrating the sequence modeling capabilities of DT with critic-based value guidance to enable learning effective policies from suboptimal datasets. Furthermore, to mitigate the data-hungry nature of transformer models, we employ a pre-trained LLM as the transformer backbone of the DT model and adopt a parameter-efficient fine-tuning strategy, i.e., LoRA, enabling rapid adaptation to UAV control tasks with small-scale datasets and low computational overhead. Extensive simulations demonstrate that LLM-CRDT outperforms benchmark online and offline RL methods, achieving up to 36.7% higher energy efficiency than current state-of-the-art DT approaches.
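The critic-regularized objective described in the abstract can be illustrated with a toy numerical sketch. Everything below (the linear critic, the weight `lam`, the loss names) is a hypothetical stand-in for exposition, not the paper's actual model: the DT's sequence-modeling loss imitates the dataset action, while a critic term pushes predicted actions toward higher estimated value, which is what allows learning from suboptimal trajectories.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a state, an action from the (possibly suboptimal) offline
# dataset, and an action proposed by the DT policy.
state = rng.normal(size=4)
a_data = rng.normal(size=2)      # dataset action (imitation target)
a_pred = rng.normal(size=2)      # DT-predicted action

# Toy linear critic Q(s, a): higher score = more valuable action.
W_q = rng.normal(size=(4, 2))
def critic_q(s, a):
    return float(s @ W_q @ a)

lam = 0.5  # weight balancing imitation against value guidance

# Critic-regularized loss: behavior-cloning term plus a term that is
# minimized when the critic scores the predicted action highly.
bc_loss = float(np.mean((a_pred - a_data) ** 2))  # sequence-modeling loss
value_term = -critic_q(state, a_pred)             # maximize Q <=> minimize -Q
total_loss = bc_loss + lam * value_term
```

With `lam = 0` this reduces to plain decision-transformer imitation; increasing `lam` strengthens the critic's pull away from merely cloning suboptimal dataset actions.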