Disentangle Object and Non-object Infrared Features via Language Guidance

📅 2026-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses infrared object detection, where low contrast and weak edges often yield insufficiently discriminative target features, particularly in complex scenes. To overcome this limitation, the study introduces, for the first time, a vision-language representation learning paradigm for this task, proposing two key modules: Semantic Feature Alignment (SFA) and Object Feature Disentanglement (OFD). These modules leverage textual semantics to guide the separation of object and non-object features, and incorporate correlation minimization to make the object representations more robust. Evaluated on the M³FD and FLIR benchmarks, the proposed method achieves state-of-the-art performance with mAP scores of 83.7% and 86.1%, respectively, significantly outperforming existing approaches.

📝 Abstract
Infrared object detection focuses on identifying and locating objects in complex environments (e.g., dark, snow, and rain) where visible imaging cameras are disabled by poor illumination. However, due to low contrast and weak edge information in infrared images, it is challenging to extract discriminative object features for robust detection. To deal with this issue, we propose a novel vision-language representation learning paradigm for infrared object detection. An additional textual supervision with rich semantic information is explored to guide the disentanglement of object and non-object features. Specifically, we propose a Semantic Feature Alignment (SFA) module to align the object features with the corresponding text features. Furthermore, we develop an Object Feature Disentanglement (OFD) module that disentangles text-aligned object features and non-object features by minimizing their correlation. Finally, the disentangled object features are entered into the detection head. In this manner, the detection performance can be remarkably enhanced via more discriminative and less noisy features. Extensive experimental results demonstrate that our approach achieves superior performance on two benchmarks: M³FD (83.7% mAP), FLIR (86.1% mAP). Our code will be publicly available once the paper is accepted.
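The two training objectives described in the abstract can be illustrated with a minimal sketch. The paper does not release code, so the loss formulations below are assumptions for illustration only: the SFA-style term is approximated here as a cosine-similarity alignment between object features and text embeddings, and the OFD-style term as an average absolute Pearson correlation between object and non-object feature channels (both function names are hypothetical).

```python
import numpy as np

def semantic_alignment_loss(obj_feats, text_feats):
    """Hypothetical SFA-style loss: pull each object feature toward its
    corresponding text embedding by maximizing cosine similarity.
    Inputs are (N, D) arrays of N paired feature vectors."""
    o = obj_feats / np.linalg.norm(obj_feats, axis=1, keepdims=True)
    t = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    # 1 - cosine similarity, averaged over the batch
    return float(np.mean(1.0 - np.sum(o * t, axis=1)))

def disentanglement_loss(obj_feats, nonobj_feats):
    """Hypothetical OFD-style loss: penalize statistical correlation
    between object and non-object features, computed here as the
    mean absolute per-channel Pearson correlation over the batch."""
    o = obj_feats - obj_feats.mean(axis=0)
    n = nonobj_feats - nonobj_feats.mean(axis=0)
    cov = (o * n).mean(axis=0)
    corr = cov / (o.std(axis=0) * n.std(axis=0) + 1e-8)
    return float(np.mean(np.abs(corr)))
```

Under this reading, the total loss would combine both terms with the standard detection loss; driving the correlation term toward zero is what "disentangles" the object stream from background clutter before it reaches the detection head.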
Problem

Research questions and friction points this paper is trying to address.

infrared object detection
low contrast
weak edge information
feature disentanglement
object detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

vision-language representation learning
feature disentanglement
infrared object detection
semantic feature alignment
object feature disentanglement