🤖 AI Summary
This work addresses the significant performance degradation of existing cross-modal pedestrian detection methods when any of the RGB, NIR, or TIR modalities is missing, a common challenge in real-world complex scenarios. To this end, the authors introduce the TRNT dataset, comprising 8,281 pixel-aligned triple-modality image triplets, and propose the Adaptive Uncertainty-aware Network (AUNet). AUNet incorporates a novel Unified Modality Validation Refinement (UMVR) mechanism, which pairs an uncertainty-aware router with semantic refinement, together with a Modality-Aware Interaction (MAI) module, to enable dynamic fusion and robust feature extraction across arbitrary modality combinations. Experimental results demonstrate that AUNet substantially enhances detection robustness under missing or varying modalities, effectively reducing missed detections and offering a reliable foundation for all-weather surveillance systems.
📝 Abstract
Existing cross-modal pedestrian detection (CMPD) methods employ complementary information from the RGB and thermal-infrared (TIR) modalities to detect pedestrians in 24-hour surveillance systems. RGB captures rich pedestrian details under daylight, while TIR excels at night. However, TIR focuses primarily on a person's silhouette, neglecting critical texture details essential for detection. In contrast, near-infrared (NIR) imaging captures texture under low-light conditions, effectively alleviating the performance issues of RGB and the detail loss of TIR, thereby reducing missed detections. To this end, we construct a new Triplet RGB-NIR-TIR (TRNT) dataset, comprising 8,281 pixel-aligned image triplets, establishing a comprehensive foundation for algorithmic research. However, due to the variable nature of real-world scenarios, imaging devices may not always capture all three modalities simultaneously. This results in input data with unpredictable combinations of modal types, which challenges existing CMPD methods: they fail to extract robust pedestrian information under arbitrary input combinations, leading to significant performance degradation. To address these challenges, we propose the Adaptive Uncertainty-aware Network (AUNet), which accurately discriminates modal availability and fully exploits the available information under uncertain inputs. Specifically, we introduce Unified Modality Validation Refinement (UMVR), which includes an uncertainty-aware router to validate modal availability and a semantic refinement module to ensure the reliability of the information within each modality. Furthermore, we design a Modality-Aware Interaction (MAI) module that adaptively activates or deactivates its internal interaction mechanisms according to the UMVR output, enabling effective fusion of complementary information from the available modalities.
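The abstract does not include code, but the core routing idea it describes (fuse only the modalities that are actually present, and down-weight the ones the router deems uncertain) can be illustrated with a minimal sketch. All function and variable names below are our own assumptions for illustration, not the authors' API, and a scalar uncertainty per modality stands in for the learned router:

```python
import math

def fuse_modalities(features, uncertainties):
    """Toy sketch of uncertainty-aware routing over RGB/NIR/TIR.

    features: dict mapping modality name -> feature vector, or None if
        that modality is missing from the input combination.
    uncertainties: dict mapping modality name -> scalar uncertainty
        (lower = more reliable), standing in for the router's estimate.
    Returns the fused feature vector and the per-modality weights.
    """
    # Keep only the modalities that were actually captured.
    avail = [m for m, f in features.items() if f is not None]
    if not avail:
        raise ValueError("no modality available")

    # Softmax over negative uncertainty: confident modalities get
    # larger fusion weights; missing ones get none at all.
    logits = [-uncertainties[m] for m in avail]
    mx = max(logits)
    exps = [math.exp(l - mx) for l in logits]
    z = sum(exps)
    weights = {m: e / z for m, e in zip(avail, exps)}

    # Weighted sum of the available feature vectors.
    dim = len(features[avail[0]])
    fused = [0.0] * dim
    for m in avail:
        for i, v in enumerate(features[m]):
            fused[i] += weights[m] * v
    return fused, weights
```

For example, with RGB missing and equally confident NIR and TIR, the sketch splits the weight evenly between the two remaining modalities; raising TIR's uncertainty shifts weight toward NIR. In AUNet itself this gating is learned and applied inside the MAI module's interaction mechanisms rather than as a single weighted sum.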