Beyond Strict Pairing: Arbitrarily Paired Training for High-Performance Infrared and Visible Image Fusion

📅 2026-03-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the high data acquisition cost and limited generalization of traditional infrared and visible image fusion methods, which rely on strictly aligned paired datasets. To overcome these limitations, the authors propose two novel training paradigms—Arbitrary Paired Training Paradigm (APTP) and Unpaired Training Paradigm (UPTP)—and develop a lightweight, end-to-end fusion framework that integrates CNN, Transformer, and GAN architectures. A new loss function is introduced to enhance cross-modal modeling capability. The study provides the first systematic validation of unpaired and arbitrarily paired training for image fusion, demonstrating that models trained with as little as 1% of unaligned data can match or even surpass the performance of those trained on 100 times more strictly aligned paired data, thereby substantially reducing data dependency while improving robustness.

📝 Abstract
Infrared and visible image fusion (IVIF) combines complementary modalities while preserving natural textures and salient thermal signatures. Existing solutions predominantly rely on extensive sets of rigidly aligned image pairs for training. However, acquiring such data is often impractical due to the costly and labour-intensive alignment process. Moreover, maintaining a rigid pairing setting during training restricts the volume of cross-modal relationships, thereby limiting generalisation performance. To this end, this work challenges the necessity of the Strictly Paired Training Paradigm (SPTP) by systematically investigating UnPaired and Arbitrarily Paired Training Paradigms (UPTP and APTP) for high-performance IVIF. We establish a theoretical objective for APTP, reflecting the complementary nature of UPTP and SPTP. More importantly, we develop a practical framework capable of significantly enriching cross-modal relationships even with severely limited and unaligned training data. To validate our propositions, three end-to-end lightweight baselines, alongside a set of innovative loss functions, are designed to cover three classic frameworks (CNN, Transformer, GAN). Comprehensive experiments demonstrate that the proposed APTP and UPTP are feasible and capable of training models on a severely limited and content-inconsistent infrared and visible dataset, achieving performance comparable to that of a dataset 100$\times$ larger under SPTP. This finding fundamentally alleviates the cost and difficulty of data collection while enhancing model robustness from the data perspective, delivering a feasible solution for IVIF studies. The code is available at \href{https://github.com/yanglinDeng/IVIF_unpair}{\textcolor{blue}{https://github.com/yanglinDeng/IVIF\_unpair}}.
Problem

Research questions and friction points this paper is trying to address.

Infrared and Visible Image Fusion
Strictly Paired Training
Unpaired Training
Arbitrarily Paired Training
Cross-modal Relationship
Innovation

Methods, ideas, or system contributions that make the work stand out.

Arbitrarily Paired Training
Infrared-Visible Image Fusion
Unpaired Learning
Cross-modal Relationship
Data-efficient Training
Yanglin Deng
School of Artificial Intelligence and Computer Science, Jiangnan University
Tianyang Xu
Jiangnan University
visual tracking, action recognition, multi-modal fusion, manifold learning
Chunyang Cheng
School of Artificial Intelligence and Computer Science, Jiangnan University
Hui Li
Jiangnan University
information fusion, multi-modal processing
Xiao-jun Wu
School of Artificial Intelligence and Computer Science, Jiangnan University
Josef Kittler
University of Surrey
engineering