🤖 AI Summary
This paper addresses active eavesdropping and jamming threats posed by malicious unmanned aerial vehicles (M-UAVs) in dynamic integrated sensing and communication (ISAC) systems for 6G networks.
Method: We formulate a non-cooperative Stackelberg game between the base station (BS) and the M-UAV that jointly accounts for communication secrecy, radar sensing accuracy, and energy efficiency. We propose a novel SCA-DRL framework: the M-UAV acts as the leader, adaptively planning collision-avoiding trajectories via deep reinforcement learning (DRL), while the BS serves as the follower, jointly optimizing full-duplex ISAC resource allocation via successive convex approximation (SCA). Combining the two methods yields low-complexity, robust computation of the dynamic equilibrium.
Results: Simulations demonstrate convergence to a stable Stackelberg equilibrium and a significant reduction in the adversary's eavesdropping success probability. The scheme guarantees a secrecy rate ≥2.5 bps/Hz and a sensing error <0.8 m while reducing network power consumption by 37%.
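The leader-follower loop described above can be illustrated with a toy scalar Stackelberg game (the utilities, coefficients, and solvers below are hypothetical stand-ins: a closed-form best response replaces the paper's SCA step, and plain gradient ascent replaces the DRL trajectory planner):

```python
# Toy scalar Stackelberg game: leader (M-UAV) anticipates the
# follower's (BS) best response when choosing its own strategy.
A, B = 0.5, 1.0  # assumed coupling / penalty coefficients

def follower_best_response(x):
    # BS (follower): closed-form best response, standing in for the
    # SCA-based resource allocation in the paper.
    return A * x

def leader_utility(x):
    # M-UAV (leader): evaluates its payoff with the follower's
    # reaction already substituted in (bilevel structure).
    p = follower_best_response(x)
    return -(x - 1.0) ** 2 - B * p ** 2

def solve_stackelberg(x=0.0, lr=0.2, iters=200, eps=1e-5):
    # Gradient ascent on the leader's utility; each step implicitly
    # re-solves the follower's problem, mimicking the sequential
    # interaction of the proposed SCA-DRL scheme.
    for _ in range(iters):
        grad = (leader_utility(x + eps) - leader_utility(x - eps)) / (2 * eps)
        x += lr * grad
    return x, follower_best_response(x)

x_star, p_star = solve_stackelberg()
# Analytic equilibrium for these utilities: x* = 1/(1 + B*A^2) = 0.8, p* = 0.4
```

The key design point this sketch captures is the asymmetry of the game: the leader optimizes through the follower's reaction function, while the follower merely best-responds to the observed leader strategy.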
📝 Abstract
In this paper, we study a secure integrated sensing and communication (ISAC) system employing a full-duplex base station with sensing capabilities against a mobile proactive adversarial target, a malicious unmanned aerial vehicle (M-UAV). We develop a game-theoretic model to enhance communication security, radar sensing accuracy, and power efficiency. The interaction between the legitimate network and the mobile adversary is formulated as a non-cooperative Stackelberg game (NSG), where the M-UAV acts as the leader and strategically adjusts its trajectory to improve its eavesdropping ability while conserving power and avoiding obstacles. In response, the legitimate network, acting as the follower, dynamically allocates resources to minimize network power usage while ensuring required secrecy rates and sensing performance. To address this challenging problem, we propose a low-complexity successive convex approximation (SCA) method for network resource optimization combined with a deep reinforcement learning (DRL) algorithm for adaptive M-UAV trajectory planning through sequential interactions and learning. Simulation results demonstrate the efficacy of the proposed method in addressing security challenges of dynamic ISAC systems in 6G, i.e., achieving a Stackelberg equilibrium with robust performance while mitigating the adversary's ability to intercept network signals.
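The Stackelberg equilibrium the abstract refers to can be stated compactly. With illustrative notation (not the paper's): let $q$ denote the leader's strategy (the M-UAV trajectory), $r$ the follower's strategy (the BS resource allocation), and $U_L$, $U_F$ their respective utilities. A Stackelberg equilibrium $(q^\star, r^\star)$ satisfies

```latex
r^\star(q) \in \arg\max_{r \in \mathcal{R}} U_F(q, r), \qquad
q^\star \in \arg\max_{q \in \mathcal{Q}} U_L\bigl(q, r^\star(q)\bigr), \qquad
r^\star = r^\star(q^\star),
```

i.e., the follower best-responds to any announced leader strategy, and the leader optimizes while anticipating that best response, matching the sequential-interaction structure solved here by SCA (follower) and DRL (leader).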