🤖 AI Summary
Low-altitude wireless networks (LAWNs) face two coupled challenges: severe security threats arising from low-altitude deployment, high mobility, and reliance on unlicensed spectrum, and the limited generalization capability of conventional AI models. To address these, we propose a large language model (LLM)-empowered reinforcement learning security framework. Specifically, the LLM automatically generates semantically enriched state representations, overcoming the limitations of handcrafted features, while a context-aware intrinsic reward mechanism guides adaptive policy learning. By jointly optimizing state representation and policy execution, the framework enables real-time modeling and responsive mitigation of dynamic attacks. Extensive simulations in representative low-altitude communication scenarios demonstrate that our approach significantly outperforms baseline methods in attack detection accuracy and interference resilience, while enhancing model adaptability to unseen threats and improving decision interpretability.
📝 Abstract
Low-altitude wireless networks (LAWNs) have the potential to revolutionize communications by supporting a range of applications, including urban parcel delivery, aerial inspections, and air taxis. However, compared with traditional wireless networks, LAWNs face unique security challenges due to low-altitude operations, frequent mobility, and reliance on unlicensed spectrum, making them more vulnerable to malicious attacks. In this paper, we investigate large artificial intelligence model (LAM)-enabled solutions for secure communications in LAWNs. Specifically, we first examine the amplified security risks in LAWNs and the key limitations of traditional AI methods. Then, we introduce the basic concepts of LAMs and delve into the role of LAMs in addressing these challenges. To demonstrate the practical benefits of LAMs for secure communications in LAWNs, we propose a novel LAM-based optimization framework that leverages large language models (LLMs) to generate enhanced state features on top of handcrafted representations, and to design intrinsic rewards accordingly, thereby improving reinforcement learning performance for secure communication tasks. Through a typical case study, simulation results validate the effectiveness of the proposed framework. Finally, we outline future directions for integrating LAMs into secure LAWN applications.
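The abstract's core idea, augmenting handcrafted RL state features with LLM-derived semantic features and adding an LLM-designed intrinsic reward to the extrinsic one, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `llm_state_features` is a hypothetical stand-in for an actual LLM call, and the jamming-likelihood proxy and shaping coefficient are invented for demonstration.

```python
import numpy as np

def llm_state_features(raw_obs):
    # Hypothetical stand-in for an LLM that maps raw channel observations
    # (here: RSSI and interference power) to a semantic feature such as
    # an inferred jamming likelihood. A sigmoid proxy is used in place of
    # an actual model call.
    rssi, interference = raw_obs
    jam_likelihood = 1.0 / (1.0 + np.exp(-(interference - rssi)))
    return np.array([jam_likelihood])

def augmented_state(handcrafted, raw_obs):
    # Enhanced state: handcrafted features concatenated with
    # LLM-generated semantic features.
    return np.concatenate([handcrafted, llm_state_features(raw_obs)])

def intrinsic_reward(prev_state, state, scale=0.1):
    # Context-aware shaping term (illustrative): reward a decrease in the
    # inferred jamming likelihood stored in the last state component.
    return scale * (prev_state[-1] - state[-1])

def total_reward(extrinsic, prev_state, state):
    # The RL agent is trained on extrinsic task reward plus the
    # intrinsic shaping term.
    return extrinsic + intrinsic_reward(prev_state, state)
```

In a full pipeline, `augmented_state` would feed a standard policy-gradient or Q-learning agent unchanged; only the state construction and reward computation differ from a conventional setup.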