🤖 AI Summary
This work addresses the high data acquisition cost and limited generalization of traditional infrared and visible image fusion methods, which rely on strictly aligned paired datasets. To overcome these limitations, the authors propose two novel training paradigms—the Arbitrary Paired Training Paradigm (APTP) and the Unpaired Training Paradigm (UPTP)—and develop lightweight, end-to-end fusion baselines covering CNN, Transformer, and GAN architectures. New loss functions are introduced to enhance cross-modal modeling capability. The study provides the first systematic validation of unpaired and arbitrarily paired training for image fusion, demonstrating that models trained with as little as 1% of unaligned data can match or even surpass the performance of models trained on 100× as much strictly aligned paired data, thereby substantially reducing data dependency while improving robustness.
📝 Abstract
Infrared and visible image fusion (IVIF) combines complementary modalities while preserving natural textures and salient thermal signatures. Existing solutions predominantly rely on large sets of rigidly aligned image pairs for training. However, acquiring such data is often impractical because the alignment process is costly and labour-intensive. Moreover, maintaining a rigid pairing setting during training restricts the diversity of cross-modal relationships, thereby limiting generalisation performance. To this end, this work challenges the necessity of the Strictly Paired Training Paradigm (SPTP) by systematically investigating UnPaired and Arbitrarily Paired Training Paradigms (UPTP and APTP) for high-performance IVIF. We establish a theoretical objective for APTP, reflecting the complementary nature of UPTP and SPTP. More importantly, we develop a practical framework capable of significantly enriching cross-modal relationships even with severely limited and unaligned training data. To validate our propositions, three end-to-end lightweight baselines, alongside a set of innovative loss functions, are designed to cover three classic frameworks (CNN, Transformer, GAN). Comprehensive experiments demonstrate that the proposed APTP and UPTP are feasible: models trained on a severely limited, content-inconsistent infrared and visible dataset achieve performance comparable to that obtained under SPTP with a dataset 100$\times$ larger. This finding fundamentally alleviates the cost and difficulty of data collection while enhancing model robustness from the data perspective, delivering a feasible solution for IVIF studies. The code is available at \href{https://github.com/yanglinDeng/IVIF_unpair}{\textcolor{blue}{https://github.com/yanglinDeng/IVIF\_unpair}}.