🤖 AI Summary
Infrared–visible image fusion often degrades downstream detection and segmentation performance because the pixel distribution of fused images differs from that of the visible-light images on which downstream models were trained. Method: We propose MambaTrans, a modality translation framework that maps fused images into the visible-light domain without modifying or fine-tuning pretrained downstream models. Its core is a state-space module with 3D selective scanning that integrates text descriptions and semantic masks—generated by a multimodal large language model and a segmentation model—as cross-modal priors, enhancing modality alignment while capturing long-range dependencies. The translator itself is trained end-to-end, using object-detection prior knowledge to minimize detection loss, and requires no architectural changes to pretrained downstream models. Contribution/Results: Evaluated on multiple public benchmarks, MambaTrans improves the detection and segmentation accuracy of models such as YOLOv8 and Mask R-CNN on fused inputs, with no parameter adjustment of the downstream models.
📝 Abstract
The goal of multimodal image fusion is to integrate complementary information from infrared and visible images, generating fused images for downstream tasks. Existing downstream pre-trained models are typically trained on visible images. However, the significant pixel-distribution differences between visible and multimodal fused images can degrade downstream task performance, sometimes even below that of using visible images alone. This paper explores adapting multimodal fused images, which exhibit significant modality differences, to object detection and semantic segmentation models trained on visible images. To this end, we propose MambaTrans, a novel modality translator for multimodal fused images. MambaTrans takes as input descriptions from a multimodal large language model and masks from semantic segmentation models. Its core component, the Multi-Model State Space Block, combines mask-image-text cross-attention with a 3D-Selective Scan Module, enhancing purely visual representations. By leveraging object-detection prior knowledge, MambaTrans minimizes detection loss during training and captures long-range dependencies among text, masks, and images. This yields favorable results with pre-trained downstream models without adjusting their parameters. Experiments on public datasets show that MambaTrans effectively improves the performance of multimodal images on downstream tasks.
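To make the mask-image-text conditioning concrete, here is a minimal NumPy sketch of cross-attention in which fused-image tokens (queries) attend to concatenated text-description and mask embeddings (keys/values). This is an illustrative approximation only: the function name, token shapes, and random projection matrices are assumptions for the sketch, not the paper's actual Multi-Model State Space Block, whose projections would be learned and which is further combined with a 3D-Selective Scan Module.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def prior_cross_attention(img_tokens, prior_tokens):
    """Image tokens attend to text/mask prior tokens (hypothetical sketch).

    img_tokens:   (N_img, d)   flattened fused-image features
    prior_tokens: (N_prior, d) concatenated text and mask embeddings
    """
    d = img_tokens.shape[-1]
    rng = np.random.default_rng(0)
    # Stand-in projection matrices; in the actual model these are learned.
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = img_tokens @ Wq, prior_tokens @ Wk, prior_tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d))  # (N_img, N_prior) attention weights
    return attn @ V                       # prior-conditioned image features

# Toy usage: 16 image tokens attend to 4 text tokens + 4 mask tokens
img = np.random.default_rng(1).standard_normal((16, 32))
priors = np.random.default_rng(2).standard_normal((8, 32))
out = prior_cross_attention(img, priors)
print(out.shape)  # (16, 32)
```

The conditioned features would then be scanned by the selective state-space module, which models long-range dependencies across the token sequence.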