🤖 AI Summary
Existing visible-infrared image fusion (VIF) methods primarily optimize fusion quality, often neglecting downstream task performance, while RGB-T crowd counting remains decoupled from VIF modeling. To bridge this gap, we propose FusionCounting—the first framework that embeds crowd counting directly into the VIF pipeline, using density maps as semantic guidance for multi-task co-optimization. Our method introduces dynamic loss weighting and adversarial training to enable mutual enhancement between fusion and counting, thereby improving both fusion fidelity and counting accuracy, especially in dense scenes. Extensive experiments on public RGB-T datasets demonstrate state-of-the-art performance: FusionCounting achieves superior fusion metrics (e.g., EN, SSIM) and significantly lower counting errors (MAE/MSE) compared with existing methods. Moreover, it exhibits improved robustness and generalization across diverse lighting and occlusion conditions.
📝 Abstract
Most visible and infrared image fusion (VIF) methods focus primarily on optimizing fused image quality. Recent studies have begun incorporating downstream tasks, such as semantic segmentation and object detection, to provide semantic guidance for VIF. However, semantic segmentation requires extensive annotations, while object detection, despite reducing annotation effort compared with segmentation, struggles in highly crowded scenes due to overlapping bounding boxes and occlusion. Moreover, although RGB-T crowd counting has gained increasing attention in recent years, no studies have integrated VIF and crowd counting into a unified framework. To address these challenges, we propose FusionCounting, a novel multi-task learning framework that integrates crowd counting into the VIF process. Crowd counting provides a direct quantitative measure of population density with minimal annotation, making it particularly suitable for dense scenes. Our framework leverages both input images and population density information in a mutually beneficial multi-task design. To accelerate convergence and balance task contributions, we introduce a dynamic loss function weighting strategy. Furthermore, we incorporate adversarial training to enhance the robustness of both VIF and crowd counting, improving the model's stability and resilience to adversarial attacks. Experimental results on public datasets demonstrate that FusionCounting not only enhances image fusion quality but also achieves superior crowd counting performance.
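The abstract names a dynamic loss-weighting strategy for balancing the fusion and counting tasks but does not specify its form. One widely used recipe for this kind of multi-task balancing is Dynamic Weight Averaging (DWA), which assigns each task a weight based on how quickly its loss has been decreasing; tasks that improve more slowly receive larger weights. The sketch below illustrates that general idea only — the function name, temperature value, and weighting rule are assumptions for illustration, not FusionCounting's actual formulation.

```python
import math

def dwa_weights(prev_losses, prev_prev_losses, temperature=2.0):
    """DWA-style dynamic task weights (illustrative, not the paper's method).

    prev_losses / prev_prev_losses: per-task losses from the last two
    training steps. A task whose loss ratio is close to 1 (slow progress)
    gets a larger weight; the weights are normalized to sum to the number
    of tasks, so the total loss scale stays comparable across steps.
    """
    ratios = [lp / max(lpp, 1e-8)
              for lp, lpp in zip(prev_losses, prev_prev_losses)]
    exps = [math.exp(r / temperature) for r in ratios]
    total = sum(exps)
    n = len(ratios)
    return [n * e / total for e in exps]

# Hypothetical example: the fusion loss dropped from 0.8 to 0.4 (fast
# progress), while the counting loss only dropped from 1.0 to 0.9, so the
# counting task should receive the larger weight at the next step.
w_fusion, w_count = dwa_weights(prev_losses=[0.4, 0.9],
                                prev_prev_losses=[0.8, 1.0])
# total_loss = w_fusion * fusion_loss + w_count * counting_loss
```

In a joint objective, these weights would multiply the per-task losses before backpropagation, letting the slower task steer more of each update.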