Transformer-based Collaborative Reinforcement Learning for Fluid Antenna System (FAS)-enabled 3D UAV Positioning

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of real-time, high-precision 3D localization of a mobile target unmanned aerial vehicle (UAV). The authors propose a novel framework integrating Fluid Antenna Systems (FAS) with multi-agent collaborative optimization. Specifically, the trajectories of controlled UAVs and the FAS port selection of passive UAVs are jointly optimized, enabling decentralized decision-making via an attention-enhanced recurrent multi-agent reinforcement learning (MARL) architecture. The approach leverages Transformer-style attention to improve global Q-function approximation, employs RNNs to model temporal dependencies, and fuses FAS-reflected signals with distance estimates to achieve accurate 3D localization. Experimental results show that the method reduces average localization error by 17.5% compared to VD-MARL and by 58.5% relative to a non-FAS baseline, significantly enhancing real-time localization performance in dynamic environments.
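The summary mentions fusing per-link distance estimates at the base station to compute the target's 3D position. A standard way to do this (the paper's exact estimator is not specified here) is linearized multilateration: subtracting the range equation of one anchor from the others turns the nonlinear range equations into a linear system. The sketch below is a minimal, library-free illustration with four illustrative anchor positions; `estimate_position` and all values are assumptions for demonstration, not the authors' implementation.

```python
import math

def estimate_position(anchors, dists):
    """Linearized multilateration for exactly 4 non-coplanar anchors.

    Subtracting the range equation of anchor 0 from each of the other
    three yields a 3x3 linear system A x = b, solved by Cramer's rule.
    """
    p0, d0 = anchors[0], dists[0]
    A, b = [], []
    for p, d in zip(anchors[1:], dists[1:]):
        A.append([2 * (p[k] - p0[k]) for k in range(3)])
        b.append(d0**2 - d**2
                 + sum(p[k]**2 for k in range(3))
                 - sum(p0[k]**2 for k in range(3)))

    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det(A)
    # Cramer's rule: replace column j of A with b, take the determinant ratio.
    def col_replaced(j):
        return [[b[i] if k == j else A[i][k] for k in range(3)] for i in range(3)]

    return tuple(det(col_replaced(j)) / D for j in range(3))

# Toy geometry: four anchor UAVs with exact (noise-free) distances.
anchors = [(0, 0, 0), (5, 0, 0), (0, 5, 0), (0, 0, 5)]
target = (1.0, 2.0, 3.0)
dists = [math.dist(p, target) for p in anchors]
print(estimate_position(anchors, dists))  # ≈ (1.0, 2.0, 3.0)
```

With noisy distance estimates and more than four anchors, the same linearization is typically solved in a least-squares sense instead of exactly.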

📝 Abstract
In this paper, a novel three-dimensional (3D) positioning framework for fluid antenna system (FAS)-enabled unmanned aerial vehicles (UAVs) is developed. In the proposed framework, a set of controlled UAVs cooperatively estimates the real-time 3D position of a target UAV. Here, the active UAV transmits a measurement signal to the passive UAVs via reflection from the target UAV. Each passive UAV estimates the distance of the active-target-passive UAV link and selects an antenna port to share the distance information with the base station (BS), which calculates the real-time position of the target UAV. As the target UAV moves due to its task operation, the controlled UAVs must optimize their trajectories and select optimal antenna ports so as to estimate the real-time position of the target UAV. We formulate this as an optimization problem that minimizes the target UAV positioning error by optimizing the trajectories of all controlled UAVs and the antenna port selection of the passive UAVs. Here, an attention-based recurrent multi-agent reinforcement learning (AR-MARL) scheme is proposed, which enables each controlled UAV to use its local Q function to determine its trajectory and antenna port while optimizing the target UAV positioning performance without knowing the trajectories and antenna port selections of the other controlled UAVs. Different from current MARL methods, the proposed method uses a recurrent neural network (RNN) that incorporates historical state-action pairs of each controlled UAV, and an attention mechanism to analyze the importance of these historical state-action pairs, thus improving the global Q function approximation accuracy and the target UAV positioning accuracy. Simulation results show that the proposed AR-MARL scheme can reduce the average positioning error by up to 17.5% and 58.5% compared to the VD-MARL scheme and to the proposed method without FAS, respectively.
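The abstract's key mechanism is an attention module that weighs historical state-action pairs before they contribute to the Q-function estimate. As a minimal sketch of that idea, the snippet below implements scaled dot-product attention over a history of feature vectors, pooled into a context that a linear head maps to a scalar Q estimate. All names, dimensions, and weights here are illustrative assumptions, not the paper's architecture; the real AR-MARL scheme uses learned RNN states and trained parameters.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pooled_q(history, query, w_out):
    """Score each historical state-action feature vector against the
    current query, pool the history with the softmax weights, and map
    the pooled context to a scalar Q estimate via a linear head."""
    dk = len(query)
    # Scaled dot-product scores: <query, h> / sqrt(d_k) for each history entry.
    scores = [sum(q * h for q, h in zip(query, hvec)) / math.sqrt(dk)
              for hvec in history]
    weights = softmax(scores)
    # Attention-weighted sum of the history vectors.
    context = [sum(w * hvec[i] for w, hvec in zip(weights, history))
               for i in range(dk)]
    q_value = sum(wo * c for wo, c in zip(w_out, context))
    return q_value, weights

# Toy example: three past state-action embeddings; the last one aligns
# most with the query, so it should get the largest attention weight.
history = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
query = [1.0, 1.0]       # stand-in for the RNN hidden state (illustrative)
w_out = [0.5, -0.2]      # illustrative Q-head weights
q_value, weights = attention_pooled_q(history, query, w_out)
print(q_value, weights)
```

In a trained model the query, keys, and output head would all be learned projections; the point of the sketch is only how attention weights let some historical state-action pairs dominate the Q approximation.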
Problem

Research questions and friction points this paper is trying to address.

Estimating real-time 3D position of moving target UAV
Optimizing trajectories and antenna port selection for UAVs
Improving positioning accuracy via attention-based reinforcement learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Transformer-based reinforcement learning for UAV positioning
Attention mechanism enhances historical state-action analysis
Recurrent neural network improves Q function accuracy
Xiaoren Xu
Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, 33146, USA
Hao Xu
Department of Electronic and Electrical Engineering, University College London, London, United Kingdom
Dongyu Wei
University of Miami
Wireless communication, Optimization, Machine Learning
Walid Saad
Professor, Electrical and Computer Engineering, Virginia Tech
6G, machine learning, semantic communications, quantum communications, cyber-physical systems
Mehdi Bennis
Centre for Wireless Communications, University of Oulu, 90014 Oulu, Finland
Mingzhe Chen
Assistant Professor, Electrical and Computer Engineering Department, University of Miami
Machine learning, digital network twins, unmanned aerial vehicles, semantic communications