Integrated Sensing and Communications for Low-Altitude Economy: A Deep Reinforcement Learning Approach

📅 2024-12-05

🤖 AI Summary
To address the dual requirements of high-speed communication and safe collision avoidance for unmanned aerial vehicles (UAVs) in low-altitude economy scenarios, this paper proposes a ground-base-station–UAV integrated sensing and communication (ISAC) system. We jointly optimize base station beamforming and UAV trajectory to maximize the total communication rate under constraints on sensing signal-to-noise ratio (SNR), collision avoidance, mission execution, and transmit power. Methodologically, we introduce DeepLSC—a novel deep reinforcement learning framework—featuring: (i) a constraint-aware noise exploration strategy ensuring feasible action selection; and (ii) a hierarchical experience replay mechanism with symmetric experience augmentation to enhance training efficiency and robustness. Simulation results demonstrate that DeepLSC achieves significant gains in total communication rate while strictly satisfying all constraints, accelerates convergence by 37%, and exhibits strong generalization across varying airspace densities and mission scales.

📝 Abstract
This paper studies an integrated sensing and communications (ISAC) system for the low-altitude economy (LAE), in which a ground base station (GBS) provides communication and navigation services to authorized unmanned aerial vehicles (UAVs) while sensing the low-altitude airspace to monitor an unauthorized mobile target. The expected communication sum-rate over a given flight period is maximized by jointly optimizing the beamforming at the GBS and the UAVs' trajectories, subject to constraints on the average signal-to-noise ratio (SNR) required for sensing, the UAVs' flight missions and collision avoidance, and the maximum transmit power at the GBS. Since this is a sequential decision-making problem under a given flight mission, we transform it into a specific Markov decision process (MDP) model called an episode task. Based on this modeling, we propose a novel LAE-oriented ISAC scheme, referred to as Deep LAE-ISAC (DeepLSC), by leveraging deep reinforcement learning (DRL). In DeepLSC, a reward function and a new action selection policy, termed the constrained noise-exploration policy, are judiciously designed to fulfill the various constraints. To enable efficient learning in episode tasks, we develop a hierarchical experience replay mechanism whose gist is to employ all experiences generated within each episode to jointly train the neural network. Furthermore, to speed up the convergence of DeepLSC, we propose a symmetric experience augmentation mechanism that simultaneously permutes the indices of all variables to enrich the available experience sets. Simulation results demonstrate that, compared with benchmarks, DeepLSC yields a higher sum-rate while meeting the preset constraints, achieves faster convergence, and is more robust across different settings.
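As a concrete illustration of the symmetric experience augmentation idea described in the abstract, the sketch below permutes UAV indices in a stored transition to enrich the replay buffer. The tuple layout, the function name, and the assumption that the sum-rate reward is invariant under UAV reindexing are all illustrative choices, not details taken from the paper.

```python
import itertools

def augment_symmetric(experience, num_uavs):
    """Generate extra replay experiences by permuting UAV indices.

    `experience` is assumed to be a tuple (state, action, reward, next_state),
    where `state`, `action`, and `next_state` are lists with one entry per
    UAV. Because the UAVs are interchangeable in a sum-rate objective,
    applying the same index permutation to every per-UAV variable yields an
    equally valid transition (this index-permutation reading of the paper's
    mechanism is an assumption).
    """
    state, action, reward, next_state = experience
    augmented = []
    for perm in itertools.permutations(range(num_uavs)):
        augmented.append((
            [state[i] for i in perm],
            [action[i] for i in perm],
            reward,  # sum-rate reward assumed permutation-invariant
            [next_state[i] for i in perm],
        ))
    return augmented

# A toy transition for 3 UAVs expands into 3! = 6 experiences.
exp = ([0, 1, 2], ['a0', 'a1', 'a2'], 1.5, [3, 4, 5])
batch = augment_symmetric(exp, 3)
```

Each stored transition thus contributes factorially many training samples, which is one plausible way such augmentation could accelerate convergence.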
Problem

Research questions and friction points this paper is trying to address.

Integrated Sensing and Communication (ISAC)
Unmanned Aerial Vehicle (UAV)
High-speed Communication and Collision Avoidance
Innovation

Methods, ideas, or system contributions that make the work stand out.

DeepLSC
Integrated Sensing and Communication (ISAC)
Deep Reinforcement Learning
👥 Authors

Xiaowen Ye
Department of Electrical Engineering, City University of Hong Kong, Hong Kong, China

Yuyi Mao
Department of Electrical and Electronic Engineering, The Hong Kong Polytechnic University, Hong Kong, China

Xianghao Yu
Assistant Professor, City University of Hong Kong (Wireless Communications, Signal Processing)

Shu Sun
Department of Electronic Engineering and the Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, Shanghai 200240, China

Liqun Fu
Full Professor, Xiamen University (wireless communication networks)

Jie Xu
School of Science and Engineering (SSE), the Shenzhen Future Network of Intelligence Institute (FNii-Shenzhen), and the Guangdong Provincial Key Laboratory of Future Networks of Intelligence, The Chinese University of Hong Kong, Shenzhen, China