Robust Pedestrian Detection with Uncertain Modality

📅 2026-02-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the significant performance degradation of existing cross-modal pedestrian detection methods when any of the RGB, NIR, or TIR modalities is missing, a common challenge in real-world complex scenarios. To this end, the authors introduce the TRNT dataset, comprising 8,281 pixel-aligned triple-modality image triplets, and propose the Adaptive Uncertainty-aware Network (AUNet). AUNet incorporates a novel Unified Modality Validation Refinement (UMVR) mechanism, which combines an uncertainty-aware router with semantic refinement, coupled with a Modality-Aware Interaction (MAI) module, to enable dynamic fusion and robust feature extraction across arbitrary modality combinations. Experimental results demonstrate that AUNet substantially enhances detection robustness under missing or varying modalities, effectively reducing missed detections and offering a reliable foundation for all-weather surveillance systems.

📝 Abstract
Existing cross-modal pedestrian detection (CMPD) employs complementary information from RGB and thermal-infrared (TIR) modalities to detect pedestrians in 24h-surveillance systems. RGB captures rich pedestrian details under daylight, while TIR excels at night. However, TIR focuses primarily on the person's silhouette, neglecting critical texture details essential for detection. The near-infrared (NIR) modality captures texture under low-light conditions, effectively alleviating the performance issues of RGB and the detail loss of TIR, thereby reducing missed detections. To this end, we construct a new Triplet RGB-NIR-TIR (TRNT) dataset, comprising 8,281 pixel-aligned image triplets, establishing a comprehensive foundation for algorithmic research. However, due to the variable nature of real-world scenarios, imaging devices may not always capture all three modalities simultaneously. This results in input data with unpredictable combinations of modal types, which challenges existing CMPD methods: they fail to extract robust pedestrian information under arbitrary input combinations, leading to significant performance degradation. To address these challenges, we propose the Adaptive Uncertainty-aware Network (AUNet) for accurately discriminating modal availability and fully utilizing the available information under uncertain inputs. Specifically, we introduce Unified Modality Validation Refinement (UMVR), which includes an uncertainty-aware router to validate modal availability and a semantic refinement module to ensure the reliability of information within each modality. Furthermore, we design a Modality-Aware Interaction (MAI) module that adaptively activates or deactivates its internal interaction mechanisms per UMVR output, enabling effective fusion of complementary information from the available modalities.
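The abstract's core idea, routing an arbitrary subset of RGB/NIR/TIR features through an uncertainty gate before fusion, can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the inverse-uncertainty weighting, and the fixed admission threshold are all illustrative assumptions standing in for AUNet's learned router.

```python
import numpy as np

def uncertainty_aware_fusion(features, uncertainties, threshold=0.5):
    """Fuse whichever of the RGB/NIR/TIR feature vectors are available.

    features:      dict of modality name -> feature vector, or None if missing
    uncertainties: dict of modality name -> scalar uncertainty in [0, 1]

    A modality is admitted only if it is present and its uncertainty is
    below the threshold (a hypothetical stand-in for a learned router);
    admitted modalities are weighted by normalized inverse uncertainty.
    """
    admitted = {
        m: f for m, f in features.items()
        if f is not None and uncertainties[m] < threshold
    }
    if not admitted:
        raise ValueError("no reliable modality available")
    # More certain modalities contribute more to the fused representation.
    weights = {m: 1.0 - uncertainties[m] for m in admitted}
    total = sum(weights.values())
    return sum((weights[m] / total) * f for m, f in admitted.items())
```

Under this sketch, a missing NIR stream (`features["nir"] is None`) or a noisy TIR stream (high uncertainty) is simply excluded, so the same code path handles any of the seven non-empty modality combinations.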
Problem

Research questions and friction points this paper is trying to address.

cross-modal pedestrian detection
uncertain modality
robust detection
multi-modal fusion
modality availability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Uncertainty-aware Network
Triplet RGB-NIR-TIR dataset
Unified Modality Validation Refinement
Modality-Aware Interaction
Cross-modal Pedestrian Detection
Qian Bie
School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430065 China
Xiao Wang
Wuhan University of Science and Technology
Computer vision
Bin Yang
National Engineering Research Center for Multimedia Software, School of Computer Science, Wuhan University, China
Zhixi Yu
School of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430065 China
Jun Chen
Electrical and Computer Engineering, McMaster University
Information Theory, Machine Learning, Natural Language Processing, Wireless Communication, Signal Processing
Xin Xu
Professor, Wuhan University of Science and Technology
Person re-identification, Low-light image processing, Salient object detection