FusionCounting: Robust visible-infrared image fusion guided by crowd counting via multi-task learning

📅 2025-08-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing visible-infrared image fusion (VIF) methods primarily optimize fusion quality and often neglect downstream task performance, while RGB-T crowd counting remains decoupled from VIF modeling. To bridge this gap, we propose FusionCounting, the first framework to embed crowd counting directly into the VIF pipeline, using density maps as semantic guidance for multi-task co-optimization. The method introduces dynamic loss weighting and adversarial training to enable mutual enhancement between fusion and counting, improving both fusion fidelity and counting accuracy, especially in dense scenes. Extensive experiments on public RGB-T datasets demonstrate state-of-the-art performance: FusionCounting achieves superior fusion metrics (e.g., EN, SSIM) and significantly lower counting errors (MAE/MSE) than existing methods, and it exhibits stronger robustness and generalization across diverse lighting and occlusion conditions.

📝 Abstract
Most visible and infrared image fusion (VIF) methods focus primarily on optimizing fused image quality. Recent studies have begun incorporating downstream tasks, such as semantic segmentation and object detection, to provide semantic guidance for VIF. However, semantic segmentation requires extensive annotations, while object detection, despite reducing annotation effort compared with segmentation, faces challenges in highly crowded scenes due to overlapping bounding boxes and occlusion. Moreover, although RGB-T crowd counting has gained increasing attention in recent years, no studies have integrated VIF and crowd counting into a unified framework. To address these challenges, we propose FusionCounting, a novel multi-task learning framework that integrates crowd counting into the VIF process. Crowd counting provides a direct quantitative measure of population density with minimal annotation, making it particularly suitable for dense scenes. Our framework leverages both input images and population density information in a mutually beneficial multi-task design. To accelerate convergence and balance task contributions, we introduce a dynamic loss function weighting strategy. Furthermore, we incorporate adversarial training to enhance the robustness of both VIF and crowd counting, improving the model's stability and resilience to adversarial attacks. Experimental results on public datasets demonstrate that FusionCounting not only enhances image fusion quality but also achieves superior crowd counting performance.
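The abstract mentions a dynamic loss weighting strategy for balancing the fusion and counting objectives, but does not spell out the formula. As a minimal sketch, one common choice for such dynamic balancing in multi-task learning is uncertainty-based weighting (Kendall et al., 2018), shown below with illustrative names; this is an assumption, not the paper's exact scheme:

```python
import math

def weighted_multitask_loss(l_fusion, l_count, log_var_f, log_var_c):
    """Combine two task losses with learnable log-variances.

    Each task loss is scaled by exp(-log_var), so a task with high
    uncertainty contributes less; the additive log_var terms keep the
    weights from collapsing to zero. Names are illustrative.
    """
    w_f = math.exp(-log_var_f)  # weight for the fusion loss
    w_c = math.exp(-log_var_c)  # weight for the counting loss
    return w_f * l_fusion + w_c * l_count + log_var_f + log_var_c

# With both log-variances at 0 the weights are 1 and the losses simply add.
total = weighted_multitask_loss(1.0, 2.0, 0.0, 0.0)  # -> 3.0
```

In practice the two log-variance scalars would be optimized jointly with the network parameters, so the task balance adapts during training rather than being hand-tuned.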
Problem

Research questions and friction points this paper is trying to address.

Integrating crowd counting into visible-infrared image fusion
Addressing annotation challenges in dense crowded scenes
Enhancing robustness against adversarial attacks through multi-task learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-task learning integrating crowd counting with image fusion
Dynamic loss weighting strategy for task balance
Adversarial training for enhanced robustness and stability
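The adversarial training contribution above is not detailed in this summary; a standard way to generate training-time perturbations is the fast gradient sign method (FGSM), sketched here on plain arrays with illustrative names, as an assumption about the setup rather than the paper's exact attack:

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.03):
    """FGSM-style adversarial example: step in the sign of the loss
    gradient w.r.t. the input, then clip back to the valid pixel range.

    x    : input image array, values in [0, 1]
    grad : gradient of the training loss w.r.t. x (from backprop)
    eps  : perturbation budget (illustrative default)
    """
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# A pixel at 0.5 with negative gradient moves down by eps.
fgsm_perturb(np.array([0.5]), np.array([-1.0]))  # -> array([0.47])
```

During robust training, the model would see both clean and perturbed visible/infrared inputs, so both the fusion and counting heads learn to tolerate small adversarial shifts.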
He Li
School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China
Xinyu Liu
School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China
Weihang Kong
School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, China
Xingchen Zhang
Senior Lecturer and Director of the Fusion Intelligence Lab, University of Exeter
Fusion Intelligence · Human-centered AI · Embodied AI · Privacy-preserving AI · Medical AI