MSCoTDet: Language-driven Multi-modal Fusion for Improved Multispectral Pedestrian Detection

📅 2024-03-22
🏛️ IEEE Transactions on Circuits and Systems for Video Technology
📈 Citations: 2 (Influential: 0)
🤖 AI Summary
To address modality bias in multispectral pedestrian detection, particularly the severe performance degradation on thermal-obscured pedestrians caused by statistically biased training data, this paper proposes a Language-driven Multi-modal Fusion (LMF) framework. Its core is a Multispectral Chain-of-Thought (MSCoT) prompting strategy that integrates the reasoning capabilities of large language models (LLMs) into an RGB-thermal dual-stream detection architecture. MSCoT prompts the LLM to reason jointly over both modalities, and LMF fuses the resulting language-side decisions with the outputs of a vision-based detector, enhancing modality complementarity and mitigating modality bias. Evaluated on mainstream benchmarks including MSRS, LMF improves detection accuracy for small-scale and thermally occluded pedestrians, with reported mAP gains of 3.2–5.8 percentage points over prior methods. These results support the effectiveness and generalizability of LLM-augmented multispectral perception.
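To make the MSCoT idea concrete, here is a minimal sketch of how a chain-of-thought prompt over the two modalities could be assembled. It assumes per-modality pedestrian descriptions are already available (e.g., from a captioning model); the function name, wording, and reasoning steps are illustrative assumptions, not the authors' exact prompt.

```python
# Minimal sketch of an MSCoT-style prompt, assuming per-modality pedestrian
# descriptions have already been produced (e.g., by a captioning model).
# Names and wording are illustrative, not the paper's exact prompt.

def build_mscot_prompt(rgb_description: str, thermal_description: str) -> str:
    """Compose a chain-of-thought prompt that reasons over both modalities."""
    return (
        "You are analyzing a candidate pedestrian region captured by an RGB "
        "camera and a thermal camera.\n"
        f"RGB observation: {rgb_description}\n"
        f"Thermal observation: {thermal_description}\n"
        "Think step by step:\n"
        "1. What does the RGB observation suggest about a pedestrian?\n"
        "2. What does the thermal observation suggest about a pedestrian?\n"
        "3. Do the two modalities agree, and which is more reliable here?\n"
        "4. Conclude: is this a pedestrian? Answer 'yes' or 'no' with a "
        "confidence between 0 and 1."
    )


if __name__ == "__main__":
    prompt = build_mscot_prompt(
        rgb_description="a person-shaped silhouette partly hidden behind a parked car",
        thermal_description="a warm upright region consistent with a human body",
    )
    print(prompt)  # send to any LLM chat API to obtain the language-side decision
```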

📝 Abstract
Multispectral pedestrian detection is attractive for around-the-clock applications due to the complementary information between RGB and thermal modalities. However, current models often fail to detect pedestrians in certain cases (e.g., thermal-obscured pedestrians), particularly due to the modality bias learned from statistically biased datasets. In this paper, we investigate how to mitigate modality bias in multispectral pedestrian detection using Large Language Models (LLMs). Accordingly, we design a Multispectral Chain-of-Thought (MSCoT) prompting strategy, which prompts the LLM to perform multispectral pedestrian detection. Moreover, we propose a novel Multispectral Chain-of-Thought Detection (MSCoTDet) framework that integrates MSCoT prompting into multispectral pedestrian detection. To this end, we design a Language-driven Multi-modal Fusion (LMF) strategy that enables fusing the outputs of MSCoT prompting with the detection results of vision-based multispectral pedestrian detection models. Extensive experiments validate that MSCoTDet effectively mitigates modality biases and improves multispectral pedestrian detection.
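The abstract's LMF strategy fuses the outputs of MSCoT prompting with the detections of a vision-based multispectral model. Below is a minimal score-level fusion sketch under that reading; the Candidate fields, the alpha weight, and the linear blending rule are assumptions for illustration, not the paper's exact fusion rule.

```python
# Minimal sketch of score-level language-driven fusion, assuming each candidate
# box carries a vision-detector confidence and an LLM-derived pedestrian
# probability (e.g., parsed from the MSCoT answer). The weighting scheme is
# illustrative, not the paper's exact fusion rule.
from dataclasses import dataclass


@dataclass
class Candidate:
    box: tuple            # (x1, y1, x2, y2) in pixels
    vision_score: float   # confidence from the multispectral detector
    llm_score: float      # pedestrian probability from MSCoT prompting


def fuse_scores(c: Candidate, alpha: float = 0.7) -> float:
    """Blend vision and language scores; alpha controls the vision weight."""
    return alpha * c.vision_score + (1.0 - alpha) * c.llm_score


if __name__ == "__main__":
    cand = Candidate(box=(120, 80, 160, 200), vision_score=0.35, llm_score=0.90)
    # Language evidence can rescue a thermal-obscured pedestrian the detector scored low.
    print(f"fused score: {fuse_scores(cand):.2f}")
```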
Problem

Research questions and friction points this paper is trying to address.

Mitigating modality bias in multispectral pedestrian detection
Improving detection of thermal-obscured pedestrians using LLMs
Integrating language-driven fusion with vision-based detection models
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based Chain-of-Thought prompting strategy
Language-driven Multi-modal Fusion mechanism
Integration of linguistic reasoning with vision detection
Taeheon Kim
Seoul National University
continual learning · machine learning · AI safety
Sangyun Chung
Integrated Vision and Language Lab., School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea
Damin Yeom
Integrated Vision and Language Lab., School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea
Youngjoon Yu
Integrated Vision and Language Lab., School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea
Hak Gu Kim
Department of Image Science and Arts, GSAIM, Chung-Ang University, Seoul, 06974, Republic of Korea
Y. Ro
Integrated Vision and Language Lab., School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon, 34141, Republic of Korea