Entity-Guided Multi-Task Learning for Infrared and Visible Image Fusion

πŸ“… 2026-01-05
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses the limitations of existing text-driven infrared and visible image fusion methods, which rely on sentence-level descriptions that are prone to semantic noise and insufficient for capturing deep semantics. To overcome these issues, the authors propose an entity-guided multi-task learning framework that leverages entity-level text extracted from image captions as pseudo-labels. The framework integrates image fusion and multi-label classification in a parallel architecture and introduces an entity-guided cross-modal interaction module to achieve fine-grained semantic alignment. Key contributions include an entity-level text denoising mechanism, an entity-based multi-task supervision strategy, and a novel cross-modal feature interaction design. Additionally, the authors release entity-annotated versions of four benchmark datasets: TNO, RoadScene, M3FD, and MSRS. Experimental results demonstrate that the proposed method significantly outperforms state-of-the-art approaches in preserving salient targets, texture details, and semantic consistency.
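The cross-modal interaction described above can be pictured as visual features attending to entity-level text embeddings. The paper's actual module is not specified here, so the following is a minimal single-head attention sketch in NumPy, with made-up dimensions and random features standing in for real patch tokens and entity embeddings:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(visual, entity, d):
    """Visual tokens (queries) attend to entity-text embeddings (keys/values).

    visual: (N, d) array of patch features
    entity: (M, d) array of entity embeddings
    Returns visual features enriched with entity context via a residual add.
    """
    scores = visual @ entity.T / np.sqrt(d)   # (N, M) visual-entity affinities
    attn = softmax(scores, axis=-1)           # each visual token distributes
                                              # attention over the M entities
    return visual + attn @ entity             # residual fusion of text context

rng = np.random.default_rng(0)
d = 8
visual = rng.standard_normal((16, d))   # 16 hypothetical patch tokens
entity = rng.standard_normal((3, d))    # 3 hypothetical extracted entities
out = cross_modal_attention(visual, entity, d)
```

A real module would add learned query/key/value projections and likely a second, intra-visual attention stage (the "inter-visual" dependencies the summary mentions); this sketch only shows the visual-entity direction.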

πŸ“ Abstract
Existing text-driven infrared and visible image fusion approaches often rely on textual information at the sentence level, which can lead to semantic noise from redundant text and fail to fully exploit the deeper semantic value of textual information. To address these issues, we propose a novel fusion approach named Entity-Guided Multi-Task learning for infrared and visible image fusion (EGMT). Our approach includes three key innovative components: (i) A principled method is proposed to extract entity-level textual information from image captions generated by large vision-language models, eliminating semantic noise from raw text while preserving critical semantic information; (ii) A parallel multi-task learning architecture is constructed, which integrates image fusion with a multi-label classification task. By using entities as pseudo-labels, the multi-label classification task provides semantic supervision, enabling the model to achieve a deeper understanding of image content and significantly improving the quality and semantic density of the fused image; (iii) An entity-guided cross-modal interaction module is also developed to facilitate fine-grained interaction between visual and entity-level textual features, which enhances feature representation by capturing cross-modal dependencies at both inter-visual and visual-entity levels. To promote wide application of the entity-guided image fusion framework, we release entity-annotated versions of four public datasets (i.e., TNO, RoadScene, M3FD, and MSRS). Extensive experiments demonstrate that EGMT achieves superior performance in preserving salient targets, texture details, and semantic consistency compared to state-of-the-art methods. The code and dataset will be publicly available at https://github.com/wyshao-01/EGMT.
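Component (ii) turns extracted entities into supervision for a multi-label classifier. A minimal sketch of that idea, assuming a hypothetical entity vocabulary and made-up classifier logits (the paper's actual label set, classifier head, and loss weighting are not specified here):

```python
import numpy as np

# Hypothetical entity vocabulary; in the paper, entities come from
# vision-language-model captions after denoising.
VOCAB = ["person", "car", "tree", "building", "road"]

def entities_to_multihot(entities, vocab=VOCAB):
    """Turn a set of extracted entity strings into a multi-hot pseudo-label."""
    return np.array([1.0 if v in entities else 0.0 for v in vocab])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multilabel_bce(logits, targets, eps=1e-7):
    """Per-entity binary cross-entropy, averaged over the vocabulary."""
    p = np.clip(sigmoid(logits), eps, 1.0 - eps)
    return float(-np.mean(targets * np.log(p) + (1 - targets) * np.log(1 - p)))

# Entities extracted from a caption of the source image pair
labels = entities_to_multihot({"person", "car"})
# Classifier-head logits predicted from the fused image (made-up numbers)
logits = np.array([2.0, 1.5, -1.0, -2.0, -0.5])

cls_loss = multilabel_bce(logits, labels)
# A joint objective would combine this with the usual fusion loss, e.g.
#   total = fusion_loss + lambda_cls * cls_loss
# where lambda_cls is a balancing weight (an assumption, not from the paper).
```

The point of the sketch is the supervision signal: the fused image must retain enough semantic content for the classifier to recover the caption's entities, which pushes the fusion branch toward semantically dense outputs.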
Problem

Research questions and friction points this paper is trying to address.

infrared and visible image fusion
text-driven fusion
semantic noise
entity-level semantics
multi-task learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

entity-guided fusion
multi-task learning
infrared and visible image fusion
cross-modal interaction
semantic supervision