Efficient UAV trajectory prediction: A multi-modal deep diffusion framework

📅 2026-01-26
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the challenge of insufficient accuracy in predicting trajectories of unauthorized drones in low-altitude airspace, which stems from the limited information provided by single-sensor systems. To overcome this limitation, the authors propose a multimodal deep diffusion framework that fuses point clouds from LiDAR and millimeter-wave radar. The approach employs structurally aligned dual-branch encoders to extract modality-specific features and introduces a bidirectional cross-attention mechanism to achieve semantic alignment and complementary fusion of geometric structures and dynamic reflectivity characteristics. A tailored loss function and post-processing strategy are further integrated to enhance prediction performance. Evaluated on the MMAUD dataset, the proposed method achieves a 40% improvement in trajectory prediction accuracy over baseline models, demonstrating the effectiveness and practicality of the multimodal fusion strategy.

📝 Abstract
To meet the requirements for managing unauthorized UAVs in the low-altitude economy, a multi-modal UAV trajectory prediction method based on the fusion of LiDAR and millimeter-wave radar information is proposed. A deep fusion network for multi-modal UAV trajectory prediction, termed the Multi-Modal Deep Fusion Framework, is designed. The overall architecture consists of two modality-specific feature extraction networks and a bidirectional cross-attention fusion module, aiming to fully exploit the complementary information of LiDAR and radar point clouds in spatial geometric structure and dynamic reflection characteristics. In the feature extraction stage, the model employs independent but structurally identical feature encoders for LiDAR and radar. After feature extraction, the model enters the Bidirectional Cross-Attention Mechanism stage to achieve information complementarity and semantic alignment between the two modalities. To verify the effectiveness of the proposed model, the MMAUD dataset used in the CVPR 2024 UG2+ UAV Tracking and Pose-Estimation Challenge is adopted as the training and testing dataset. Experimental results show that the proposed multi-modal fusion model significantly improves trajectory prediction accuracy, achieving a 40% improvement compared to the baseline model. In addition, ablation experiments are conducted to demonstrate the effectiveness of different loss functions and post-processing strategies in improving model performance. The proposed model can effectively utilize multi-modal data and provides an efficient solution for unauthorized UAV trajectory prediction in the low-altitude economy.
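The bidirectional cross-attention stage described in the abstract can be sketched as follows. This is a minimal pure-Python illustration of the general idea (each modality's tokens query the other modality, and the attended cross-modal context is concatenated back onto each token), not the authors' implementation; the function names, token dimensions, and fusion-by-concatenation choice are assumptions.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(queries, keys, values):
    # Scaled dot-product attention: each query token attends over
    # all key/value tokens from the other modality.
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        # Weighted sum of value tokens -> cross-modal context vector.
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

def bidirectional_fuse(lidar_feats, radar_feats):
    # LiDAR tokens query radar tokens and vice versa; each token is
    # then augmented (here: concatenated) with the context it attended to.
    lidar_ctx = cross_attention(lidar_feats, radar_feats, radar_feats)
    radar_ctx = cross_attention(radar_feats, lidar_feats, lidar_feats)
    fused_lidar = [f + c for f, c in zip(lidar_feats, lidar_ctx)]
    fused_radar = [f + c for f, c in zip(radar_feats, radar_ctx)]
    return fused_lidar, fused_radar
```

In a real network the queries, keys, and values would be learned linear projections of the encoder features rather than the raw tokens, and the fused tokens would feed a prediction head; this sketch only shows the information-exchange pattern between the two branches.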
Problem

Research questions and friction points this paper is trying to address.

UAV trajectory prediction
unauthorized UAV
low-altitude economy
multi-modal sensing
LiDAR and radar fusion
Innovation

Methods, ideas, or system contributions that make the work stand out.

multi-modal fusion
trajectory prediction
cross-attention mechanism
LiDAR-radar fusion
UAV tracking
Yuan Gao
Assistant Professor, Shanghai University
Integrated Sensing & Communication, Channel Extrapolation, Generative AI
Xinyu Guo
Samsung Research America
AI, computer vision, machine learning, medical image analysis
Wenjing Xie
School of Communication and Information Engineering, Shanghai University, Shanghai, 200444
Zifan Wang
College of Computer and Information Science, Southwest University, Chongqing, 400715
Hongwen Yu
School of Communication and Information Engineering, Shanghai University, Shanghai, 200444
Gongyang Li
School of Communication and Information Engineering, Shanghai University, Shanghai, 200444
Shugong Xu
Professor at Xi'an Jiaotong-Liverpool University, IEEE Fellow
Machine Learning, Pattern Recognition, Wireless Systems