🤖 AI Summary
To address key bottlenecks in UAV target detection, namely the scarcity of diverse modalities, high construction costs, and imprecise annotations in existing multimodal datasets, this paper introduces UEMM-Air, a synthetic multimodal multi-task dataset designed for UAVs. It comprises 120,000 precisely aligned image pairs across six modalities, spanning heterogeneous flight scenarios, viewpoints, and altitudes. The authors propose an automated aerial data acquisition framework built on Unreal Engine, integrating rule-based flight logic, a heuristic geometry-aware annotation algorithm, and label-driven text generation to enable cost-effective, high-fidelity multimodal synthesis and alignment. UEMM-Air establishes new benchmark results for UAV multimodal learning: models pretrained on it outperform those pretrained on comparable datasets when transferred to downstream tasks. The dataset is publicly released to support UAV-oriented multimodal perception research.
📝 Abstract
The development of multi-modal learning for Unmanned Aerial Vehicles (UAVs) typically relies on large amounts of pixel-aligned multi-modal image data. However, existing datasets face challenges such as limited modalities, high construction costs, and imprecise annotations. To address this, we propose UEMM-Air, a synthetic multi-modal UAV-based multi-task dataset. Specifically, we simulate various UAV flight scenarios and object types using the Unreal Engine (UE). We then design the UAV's flight logic to automatically collect data across different scenarios, perspectives, and altitudes. Furthermore, we propose a novel heuristic automatic annotation algorithm to generate accurate object detection labels. Finally, we use these labels to generate text descriptions of the images, enabling UEMM-Air to support additional cross-modal tasks. In total, UEMM-Air consists of 120k pairs of images across 6 modalities with precise annotations. Moreover, we conduct extensive experiments and establish new benchmark results on our dataset. We also find that models pre-trained on UEMM-Air achieve better performance on downstream tasks than those pre-trained on other similar datasets. The dataset is publicly available (https://github.com/1e12Leon/UEMM-Air) to support research on multi-modal tasks for UAVs.