UEMM-Air: A Synthetic Multi-modal Dataset for Unmanned Aerial Vehicle Object Detection

📅 2024-06-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address key bottlenecks in UAV target detection—namely, the scarcity of diverse modalities, high construction costs, and low annotation accuracy in existing multimodal datasets—this paper introduces UEMM-Air, the first synthetic multimodal dataset specifically designed for UAVs. It comprises 120,000 precisely aligned image pairs across six modalities, spanning heterogeneous flight scenarios, viewpoints, and altitudes. We propose an automated aerial data acquisition framework built on Unreal Engine, integrating rule-based flight logic, geometry-aware heuristic annotation, and cross-modal text generation to enable cost-effective, high-fidelity multimodal synthesis and alignment. UEMM-Air establishes a new benchmark for UAV multimodal learning: models pretrained on it achieve significant performance gains over state-of-the-art methods on downstream detection tasks. The dataset is publicly released to foster advancement in UAV-oriented multimodal perception research.
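The summary mentions cross-modal text generation driven by the detection labels. As a toy illustration of that idea (not the paper's actual method — the function name, inputs, and caption template here are invented for the sketch), structured annotations can be turned into a short natural-language description:

```python
from collections import Counter

def labels_to_caption(class_names, scene, altitude_m):
    """Toy sketch: build an image caption from detection labels.

    class_names: list of detected object class names, e.g. ["car", "car"].
    scene, altitude_m: scene type and flight altitude, which a simulator
    like the one described here would know exactly.
    NOTE: illustrative only; the paper's generation pipeline is not shown.
    """
    counts = Counter(class_names)
    parts = [f"{n} {cls}{'s' if n > 1 else ''}"
             for cls, n in sorted(counts.items())]
    objs = ", ".join(parts) if parts else "no objects"
    return (f"An aerial image of a {scene} taken at about "
            f"{altitude_m} m, containing {objs}.")
```

For example, `labels_to_caption(["car", "car", "truck"], "highway", 60)` yields `"An aerial image of a highway taken at about 60 m, containing 2 cars, 1 truck."` — the appeal of synthetic data is that every field of such a caption is exact rather than estimated.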

📝 Abstract
The development of multi-modal learning for Unmanned Aerial Vehicles (UAVs) typically relies on large amounts of pixel-aligned multi-modal image data. However, existing datasets face challenges such as limited modalities, high construction costs, and imprecise annotations. To this end, we propose UEMM-Air, a synthetic multi-modal, multi-task UAV dataset. Specifically, we simulate various UAV flight scenarios and object types using Unreal Engine (UE). We then design the UAV's flight logic to automatically collect data across different scenarios, perspectives, and altitudes. Furthermore, we propose a novel heuristic automatic annotation algorithm to generate accurate object detection labels. Finally, we use these labels to generate text descriptions of the images, enabling UEMM-Air to support additional cross-modal tasks. In total, UEMM-Air consists of 120k pairs of images spanning 6 modalities with precise annotations. We conduct extensive experiments and establish new benchmark results on the dataset, and we find that models pre-trained on UEMM-Air outperform those pre-trained on comparable datasets on downstream tasks. The dataset is publicly available (https://github.com/1e12Leon/UEMM-Air) to support research on multi-modal UAV tasks.
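The "geometry-aware heuristic annotation" idea rests on the fact that a simulator knows every object's 3D pose and the camera's parameters exactly, so 2D detection labels can be derived rather than hand-drawn. A minimal sketch of that principle, assuming a pinhole camera model (the function and variable names are this sketch's, not the paper's):

```python
import numpy as np

def project_box(corners_world, K, R, t):
    """Project the 8 corners of a 3D box (world frame) into the image
    and return an axis-aligned 2D box (x_min, y_min, x_max, y_max).

    corners_world: (8, 3) array of box-corner coordinates.
    K: (3, 3) camera intrinsic matrix.
    R, t: world-to-camera rotation (3, 3) and translation (3,).
    Illustrative sketch only; a full pipeline would also handle
    occlusion, truncation at image borders, and minimum-size filtering.
    """
    cam = corners_world @ R.T + t          # world -> camera frame
    assert np.all(cam[:, 2] > 0), "box must lie in front of the camera"
    px = cam @ K.T                         # pinhole projection
    px = px[:, :2] / px[:, 2:3]            # perspective divide
    x_min, y_min = px.min(axis=0)
    x_max, y_max = px.max(axis=0)
    return x_min, y_min, x_max, y_max
```

Because the inputs are simulator ground truth, the resulting boxes are pixel-exact — this is the mechanism behind the "low annotation cost, high annotation accuracy" claim of synthetic datasets like this one.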
Problem

Research questions and friction points this paper is trying to address.

Multi-modal UAV object detection
Synthetic dataset creation
Accurate automatic annotation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic multi-modal UAV dataset
Unreal Engine simulation
Heuristic automatic annotation algorithm
👥 Authors
Fan Liu (Hohai University)
Liang Yao (Hohai University)
Shengxiang Xu (Hohai University)
Chuanyi Zhang (Hohai University)
Xing Ma (Meituan)
Jianyu Jiang (ByteDance)
Zequan Wang
Shimin Di
Jun Zhou