🤖 AI Summary
To address the lack of dedicated benchmarks for three-dimensional (3D) perception of low-altitude aircraft (LAAs), this work introduces LAA3D, the first large-scale, multi-scenario 3D perception dataset for LAAs, comprising 15,000 real-world images and 600,000 high-fidelity synthetic frames. It supports three core tasks: 3D object detection, multi-object tracking (MOT), and 6-degree-of-freedom (6-DoF) pose estimation. We propose the first unified multi-task evaluation benchmark tailored to LAAs and design MonoLAA, a baseline monocular 3D detector that handles zoom-camera inputs with varying focal lengths. Experiments demonstrate that pretraining on synthetic data significantly enhances performance on real-world data, enabling strong simulation-to-reality (sim-to-real) transfer. The baseline achieves robust 3D localization and cross-domain generalization on real data. This work establishes a foundational dataset and a methodological paradigm for intelligent low-altitude perception research.
📄 Abstract
Perception of Low-Altitude Aircraft (LAA) in 3D space enables precise 3D object localization and behavior understanding. However, datasets tailored for 3D LAA perception remain scarce. To address this gap, we present LAA3D, a large-scale dataset designed to advance 3D detection and tracking of low-altitude aerial vehicles. LAA3D contains 15,000 real images and 600,000 synthetic frames, captured across diverse scenarios, including urban and suburban environments. It covers multiple aerial object categories, including electric Vertical Take-Off and Landing (eVTOL) aircraft, Micro Aerial Vehicles (MAVs), and helicopters. Each instance is annotated with a 3D bounding box, a class label, and an instance identity, supporting tasks such as 3D object detection, 3D multi-object tracking (MOT), and 6-DoF pose estimation. In addition, we establish the LAA3D Benchmark, which integrates multiple tasks and methods under unified evaluation protocols for fair comparison. Furthermore, we propose MonoLAA, a monocular 3D detection baseline that achieves robust 3D localization from zoom cameras with varying focal lengths. Models pretrained on synthetic images transfer effectively to real-world data after fine-tuning, demonstrating strong sim-to-real generalization. Our LAA3D provides a comprehensive foundation for future research in low-altitude 3D object perception.
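To make the annotation description concrete, the sketch below shows one plausible way a per-instance label (3D bounding box, class label, instance identity) could be represented in code. The field names, units, and coordinate conventions here are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass

@dataclass
class LAAInstance:
    """Hypothetical per-instance annotation record (illustrative only)."""
    category: str                       # e.g. "eVTOL", "MAV", "Helicopter"
    track_id: int                       # instance identity, stable across frames for MOT
    center_xyz: tuple[float, float, float]  # assumed: 3D box center in camera frame, metres
    size_lwh: tuple[float, float, float]    # assumed: box length, width, height in metres
    yaw: float                          # assumed: heading angle about the vertical axis, radians

# Example instance: a micro aerial vehicle 40 m in front of the camera.
ann = LAAInstance(
    category="MAV",
    track_id=7,
    center_xyz=(2.0, -1.5, 40.0),
    size_lwh=(0.4, 0.4, 0.2),
    yaw=1.57,
)
```

A record like this is sufficient to drive all three tasks the abstract lists: the box fields support 3D detection, `track_id` supports MOT association, and the pose fields (center plus orientation) support 6-DoF pose evaluation.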