🤖 AI Summary
This work addresses the challenge of effectively fusing heterogeneous thermal and visible-spectrum sensors for reliable drone detection, given their disparities in resolution, viewpoint, and field of view. To this end, two multimodal fusion strategies are proposed: RGIF, which leverages ECC registration and guided filtering, and RGMAF, which integrates affine/optical-flow alignment with a reliability-weighted attention mechanism. By incorporating alignment-awareness and reliability gating, the methods adaptively combine the high contrast of thermal imagery with the rich detail of visible data, thereby mitigating poor spatial correspondence and annotation inconsistencies. Evaluated on the MMFW-UAV dataset, RGIF achieves a mAP@50 of 97.65%, while RGMAF attains the highest recall of 98.64%, both significantly outperforming single-modality baselines.
📝 Abstract
Reliable unmanned aerial vehicle (UAV) detection is critical for autonomous airspace monitoring but remains challenging when integrating sensor streams that differ substantially in resolution, perspective, and field of view. Conventional fusion methods, such as wavelet-, Laplacian-, and decision-level approaches, often fail to preserve spatial correspondence across modalities and suffer from annotation inconsistencies, limiting their robustness in real-world settings. This study introduces two fusion strategies, Registration-aware Guided Image Fusion (RGIF) and Reliability-Gated Modality-Attention Fusion (RGMAF), designed to overcome these limitations. RGIF employs Enhanced Correlation Coefficient (ECC)-based affine registration combined with guided filtering to maintain thermal saliency while enhancing structural detail. RGMAF integrates affine and optical-flow registration with a reliability-weighted attention mechanism that adaptively balances thermal contrast and visual sharpness. Experiments were conducted on the Multi-Sensor and Multi-View Fixed-Wing (MMFW)-UAV dataset comprising 147,417 annotated air-to-air frames collected from infrared, wide-angle, and zoom sensors. Among single-modality detectors, YOLOv10x demonstrated the most stable cross-domain performance and was selected as the detection backbone for evaluating fused imagery. RGIF improved on the visual baseline by 2.13% mAP@50 (achieving 97.65%), while RGMAF attained the highest recall of 98.64%. These findings show that registration-aware and reliability-adaptive fusion provides a robust framework for integrating heterogeneous modalities, substantially enhancing UAV detection performance in multimodal environments.
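To make the reliability-gating idea concrete, here is a minimal NumPy sketch of per-pixel reliability-weighted fusion in the spirit of RGMAF. It is an illustration, not the authors' implementation: the reliability cues (`local_contrast` for thermal, `gradient_sharpness` for visible) and the softmax-style weighting are assumed stand-ins for the paper's reliability-weighted attention, and the inputs are assumed to be single-channel images already registered (e.g. via ECC affine alignment or optical flow).

```python
import numpy as np

def local_contrast(img, eps=1e-6):
    """Reliability cue for the thermal stream: absolute deviation from the
    global mean, normalized to [0, 1]. A simple proxy for thermal contrast."""
    c = np.abs(img - img.mean())
    return c / (c.max() + eps)

def gradient_sharpness(img, eps=1e-6):
    """Reliability cue for the visible stream: per-pixel gradient magnitude,
    normalized to [0, 1]. A simple proxy for visual sharpness/detail."""
    gy, gx = np.gradient(img.astype(np.float64))
    g = np.hypot(gx, gy)
    return g / (g.max() + eps)

def reliability_gated_fusion(thermal, visible, eps=1e-6):
    """Fuse two pre-registered single-channel images with per-pixel weights
    derived from modality-specific reliability cues (softmax-normalized),
    so each pixel leans toward whichever modality is locally more reliable."""
    w_t = np.exp(local_contrast(thermal))
    w_v = np.exp(gradient_sharpness(visible))
    z = w_t + w_v + eps
    return (w_t * thermal + w_v * visible) / z

# Usage with synthetic frames (real inputs would be aligned IR/RGB crops):
rng = np.random.default_rng(0)
thermal = rng.random((64, 64))
visible = rng.random((64, 64))
fused = reliability_gated_fusion(thermal, visible)
```

Because the weights are normalized per pixel, the fused frame is a convex combination of the two inputs, which keeps intensities in range and avoids the halo artifacts that fixed global blending weights can introduce.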