🤖 AI Summary
Existing infrared–visible image fusion methods struggle to simultaneously achieve high fusion quality and strong downstream task performance. To address this, we propose OCCO, a fusion framework guided by a large vision model (LVM) that introduces LVM-based semantic distillation for object-aware perception and contextual modeling, marking the first such application in image fusion. OCCO features a dual-path contrastive learning mechanism that explicitly preserves target integrity within the fused feature space, alongside a cross-modal feature interaction fusion network designed to mitigate modality conflicts. Evaluated on four benchmark datasets, OCCO consistently outperforms eight state-of-the-art methods, achieving significant gains in both fusion quality metrics (e.g., PSNR, SSIM) and downstream object detection performance (up to +3.2% mAP). The framework thus enables synergistic optimization of high-fidelity fusion and robust task generalization.
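The summary mentions LVM-based semantic distillation but does not specify the LVM or the loss. Below is a minimal PyTorch sketch of one common way such guidance is realized, with a frozen pretrained encoder acting as the teacher; the names `SemanticDistillationLoss`, `proj`, `fusion_feat`, and `lvm_feat` are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticDistillationLoss(nn.Module):
    """Align fusion-network features with features from a frozen LVM.

    `lvm_feat` stands in for features produced by whatever pretrained
    large vision model is used (unspecified in the summary); `proj`
    maps fusion features into the LVM's embedding dimension.
    """
    def __init__(self, fusion_dim: int, lvm_dim: int):
        super().__init__()
        self.proj = nn.Conv2d(fusion_dim, lvm_dim, kernel_size=1)

    def forward(self, fusion_feat: torch.Tensor, lvm_feat: torch.Tensor) -> torch.Tensor:
        # Project fusion features into the LVM embedding space,
        # then match spatial resolution before comparing.
        student = self.proj(fusion_feat)
        student = F.interpolate(
            student, size=lvm_feat.shape[-2:], mode="bilinear", align_corners=False
        )
        # Cosine-similarity distillation: the teacher is frozen (detached),
        # so only the fusion network receives gradients.
        return (1.0 - F.cosine_similarity(student, lvm_feat.detach(), dim=1)).mean()
```

In this style of design, the LVM stays fixed and only supplies semantic targets, which matches the summary's claim that the fusion network can "focus solely on fusion" while inheriting object-aware semantics.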
📝 Abstract
Image fusion is a crucial technique in computer vision whose goal is to generate high-quality fused images while improving the performance of downstream tasks. However, existing fusion methods struggle to balance these two factors: achieving high quality in fused images may result in lower performance on downstream visual tasks, and vice versa. To address this drawback, a novel LVM (large vision model)-guided fusion framework with Object-aware and Contextual COntrastive learning is proposed, termed OCCO. The pre-trained LVM provides semantic guidance, allowing the network to focus solely on the fusion task while emphasizing salient semantic features in the form of contrastive learning. Additionally, a novel feature interaction fusion network is designed to resolve information conflicts in fused images caused by modality differences. By learning to distinguish positive samples from negative samples in the latent feature space (the contextual space), the integrity of target information in the fused image is improved, thereby benefiting downstream performance. Finally, the proposed method is compared with eight state-of-the-art methods on four datasets, validating its effectiveness, and exceptional performance is also demonstrated on a downstream visual task.
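The abstract describes separating positive from negative samples in a latent (contextual) space without spelling out the loss. A widely used instantiation of such an objective is the InfoNCE loss; the sketch below is an illustration under that assumption, and all tensor names, shapes, and the temperature `tau` are hypothetical rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def contextual_contrastive_loss(
    anchor: torch.Tensor,      # (B, D) features of the fused image
    positive: torch.Tensor,    # (B, D) features of the positive sample
    negatives: torch.Tensor,   # (B, K, D) features of K negative samples
    tau: float = 0.07,
) -> torch.Tensor:
    """InfoNCE-style contrastive loss over latent features."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    # Temperature-scaled cosine similarities.
    pos_logit = (anchor * positive).sum(-1, keepdim=True) / tau       # (B, 1)
    neg_logits = torch.einsum("bd,bkd->bk", anchor, negatives) / tau  # (B, K)
    logits = torch.cat([pos_logit, neg_logits], dim=1)                # (B, 1+K)

    # The positive sits at index 0; cross-entropy pulls the anchor
    # toward it and away from all negatives.
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)
```

As a usage intuition, the anchor could be a fused-image feature, the positive a feature of the same scene's salient target, and the negatives features from degraded or unrelated regions; the paper's actual sampling strategy may differ from this sketch.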