🤖 AI Summary
Low-altitude UAV 3D perception faces critical challenges, including the scarcity of real-world data, high annotation costs for multimodal sensors, and difficulty in cross-modal alignment. To address these, this paper introduces UAV-MM3D, the first multimodal synthetic benchmark tailored for low-altitude UAVs. Leveraging high-fidelity physics-based simulation, UAV-MM3D synchronously generates RGB, infrared, LiDAR, radar, and event-camera data across diverse scenes and weather conditions, yielding a large-scale synthetic dataset of 400K frames with 2D/3D bounding boxes, 6-DoF pose annotations, and instance-level labels. The authors further propose LGFusionNet, a LiDAR-guided multimodal fusion network, and a trajectory prediction model; together with the synthetic data, these baselines significantly reduce reliance on real-world data collection. The benchmark and baseline models establish a standardized evaluation platform for 3D detection, 6-DoF pose estimation, and trajectory prediction, thereby enhancing algorithm generalizability and reproducibility.
📝 Abstract
Accurate perception of UAVs in complex low-altitude environments is critical for airspace security and related intelligent systems. Developing reliable solutions requires large-scale, accurately annotated, multimodal data. However, real-world UAV data collection faces inherent constraints due to airspace regulations, privacy concerns, and environmental variability, while manual annotation of 3D poses and cross-modal correspondences is time-consuming and costly. To overcome these challenges, we introduce UAV-MM3D, a high-fidelity multimodal synthetic dataset for low-altitude UAV perception and motion understanding. It comprises 400K synchronized frames across diverse scenes (urban areas, suburbs, forests, coastal regions) and weather conditions (clear, cloudy, rainy, foggy), featuring multiple UAV models (micro, small, and medium-sized) and five modalities: RGB, IR, LiDAR, Radar, and DVS (Dynamic Vision Sensor). Each frame provides 2D/3D bounding boxes, 6-DoF poses, and instance-level annotations, enabling core UAV perception tasks such as 3D detection, pose estimation, target tracking, and short-term trajectory forecasting. We further propose LGFusionNet, a LiDAR-guided multimodal fusion baseline, and a dedicated UAV trajectory prediction baseline to facilitate benchmarking. With its controllable simulation environment, comprehensive scenario coverage, and rich annotations, UAV-MM3D offers a public benchmark for advancing 3D perception of UAVs.