WAVE-DETR Multi-Modal Visible and Acoustic Real-Life Drone Detector

📅 2025-09-11
🤖 AI Summary
To address the limited robustness of small unmanned aerial vehicle (UAV) detection in real-world scenarios, this paper proposes an RGB-acoustic multimodal detection framework. Methodologically, it unifies Deformable DETR (as the visual backbone) and Wav2Vec2 (as the acoustic encoder) for the first time, and designs four cross-modal fusion architectures—gated, linear, MLP-based, and cross-attention—to enable multiscale feature collaboration. Key contributions include: (1) the first end-to-end RGB-acoustic joint detection architecture; and (2) a gated fusion mechanism that significantly improves small-object detection performance. On the ARDrone dataset, gated fusion improves the mAP for small UAVs by 11.1% to 15.3% across IoU thresholds, and improves overall mAP by 3.27–5.84%. Moreover, the framework demonstrates markedly better out-of-distribution generalization than unimodal baselines.

📝 Abstract
We introduce WAVE-DETR, a multi-modal drone detector combining visible RGB and acoustic signals for robust real-life UAV object detection. Our approach fuses visual and acoustic features in a unified object detector built on the Deformable DETR and Wav2Vec2 architectures, achieving strong performance under challenging environmental conditions. Our work leverages the existing Drone-vs-Bird dataset and the newly generated ARDrone dataset containing more than 7,500 synchronized images and audio segments. We show how the acoustic information improves the performance of the Deformable DETR object detector on the real ARDrone dataset. We developed, trained and tested four fusion configurations based on a gated mechanism, a linear layer, an MLP and cross-attention. The Wav2Vec2 acoustic embeddings are fused with the multi-resolution feature maps of the Deformable DETR and enhance detection performance across all drone sizes. The best performer is the gated fusion approach, which improves the mAP of the Deformable DETR object detector on our in-distribution and out-of-distribution ARDrone datasets by 11.1% to 15.3% for small drones across all IoU thresholds between 0.5 and 0.9. The mAP scores for medium and large drones are also enhanced, with overall gains across all drone sizes ranging from 3.27% to 5.84%.
Problem

Research questions and friction points this paper is trying to address.

Combining visible RGB and acoustic signals for drone detection
Improving object detection performance in challenging environmental conditions
Fusing visual and acoustic features using multiple fusion methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fuses acoustic features into the Deformable DETR visual backbone
Integrates Wav2Vec2 acoustic embeddings with multi-resolution features
Employs gated fusion mechanism to enhance detection performance
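The gating idea above can be illustrated with a minimal NumPy sketch: a clip-level acoustic embedding is projected into the visual feature dimension, broadcast across spatial positions, and blended with the visual features through a learned sigmoid gate. All weight names, shapes, and the exact gate formula here are illustrative assumptions, not the paper's layer definitions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(visual, audio, W_proj, W_gate, b_gate):
    # visual: (N, d) flattened multi-scale features from the visual backbone
    # audio:  (a,)  clip-level acoustic embedding (e.g., pooled Wav2Vec2 output)
    audio_proj = audio @ W_proj                             # (d,) map audio into visual dim
    audio_map = np.broadcast_to(audio_proj, visual.shape)   # tile across positions
    gate_in = np.concatenate([visual, audio_map], axis=-1)  # (N, 2d) both modalities
    g = sigmoid(gate_in @ W_gate + b_gate)                  # (N, d) per-feature gate in (0, 1)
    return g * visual + (1.0 - g) * audio_map               # convex blend of modalities

# Toy shapes (hypothetical, for illustration only)
N, d, a = 6, 8, 16
visual = rng.standard_normal((N, d))
audio = rng.standard_normal(a)
W_proj = rng.standard_normal((a, d))
W_gate = rng.standard_normal((2 * d, d))
b_gate = np.zeros(d)
fused = gated_fusion(visual, audio, W_proj, W_gate, b_gate)
print(fused.shape)  # (6, 8)
```

Because the gate is a sigmoid, each fused value is a convex combination of the visual and acoustic terms, so the acoustic channel can dominate where visual evidence is weak (e.g., very small drones) without overwriting strong visual features.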
Razvan Stefanescu
Meta, Virginia Tech, Peraton Labs
Deep Learning, Computer Vision, Sensor Fusion, Data Assimilation, Uncertainty Quantification
Ethan Oh
Peraton Labs, Basking Ridge, USA
Ruben Vazquez
Peraton Labs, Basking Ridge, USA
Chris Mesterharm
Peraton Labs, Basking Ridge, USA
Constantin Serban
Peraton Labs, Basking Ridge, USA
Ritu Chadha
Vice President, Peraton Labs
Machine Learning, Cyber Security, Wireless Networking