MambaTrans: Multimodal Fusion Image Translation via Large Language Model Priors for Downstream Visual Tasks

📅 2025-08-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Infrared–visible image fusion can degrade downstream detection and segmentation performance because fused images differ markedly in pixel distribution from the visible-light images on which pretrained downstream models are trained. Method: We propose MambaTrans, a modality translator that maps multimodal fused images toward the visible-light domain. Its core, the Multi-Model State Space Block, combines mask-image-text cross-attention with a 3D-Selective Scan Module, taking text descriptions from a multimodal large language model and semantic segmentation masks as cross-modal priors to capture long-range dependencies among text, masks, and images. Training minimizes a detection loss as object-detection prior knowledge; the pretrained downstream models themselves require no architectural changes or parameter fine-tuning. Contribution/Results: On multiple public benchmarks, MambaTrans improves the detection and segmentation accuracy of models such as YOLOv8 and Mask R-CNN on fused inputs, without adjusting the downstream models' parameters.

📝 Abstract
The goal of multimodal image fusion is to integrate complementary information from infrared and visible images, generating multimodal fused images for downstream tasks. Existing downstream pre-training models are typically trained on visible images. However, the significant pixel distribution differences between visible and multimodal fusion images can degrade downstream task performance, sometimes even below that of using only visible images. This paper explores adapting multimodal fused images with significant modality differences to object detection and semantic segmentation models trained on visible images. To address this, we propose MambaTrans, a novel multimodal fusion image modality translator. MambaTrans uses descriptions from a multimodal large language model and masks from semantic segmentation models as input. Its core component, the Multi-Model State Space Block, combines mask-image-text cross-attention and a 3D-Selective Scan Module, enhancing pure visual capabilities. By leveraging object detection prior knowledge, MambaTrans minimizes detection loss during training and captures long-term dependencies among text, masks, and images. This enables favorable results in pre-trained models without adjusting their parameters. Experiments on public datasets show that MambaTrans effectively improves multimodal image performance in downstream tasks.
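The Multi-Model State Space Block combines mask-image-text cross-attention with a 3D-Selective Scan Module. A minimal NumPy sketch of the cross-attention half is below, with image tokens as queries attending over concatenated text and mask tokens as cross-modal priors. All shapes, projection sizes, and function names here are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch (not the paper's code): mask-image-text cross-attention,
# where image tokens attend to concatenated text and mask tokens as priors.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(img_tokens, text_tokens, mask_tokens, d_k=64, seed=0):
    """Image tokens (queries) attend over text+mask tokens (keys/values)."""
    rng = np.random.default_rng(seed)
    d = img_tokens.shape[-1]
    # Random projections stand in for learned weight matrices.
    Wq = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d_k)) / np.sqrt(d)
    # Text and mask tokens are concatenated into one cross-modal prior sequence.
    prior = np.concatenate([text_tokens, mask_tokens], axis=0)
    Q, K, V = img_tokens @ Wq, prior @ Wk, prior @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))   # (num_img, num_prior)
    return attn @ V                          # prior-conditioned image features

img = np.random.default_rng(1).standard_normal((16, 32))  # 16 image tokens
txt = np.random.default_rng(2).standard_normal((4, 32))   # 4 text tokens
msk = np.random.default_rng(3).standard_normal((8, 32))   # 8 mask tokens
out = cross_attention(img, txt, msk)
print(out.shape)  # → (16, 64)
```

Using image tokens as queries means the output stays aligned with the image grid, so it can feed the subsequent selective-scan stage without reshaping.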
Problem

Research questions and friction points this paper is trying to address.

Adapt multimodal fused images to visible-trained models
Reduce performance gap in downstream visual tasks
Leverage LLM priors for improved image translation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses multimodal LLM descriptions and segmentation masks
Integrates mask-image-text cross-attention mechanism
Minimizes detection loss with prior knowledge
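The last bullet — training the translator against a frozen detector's loss — can be sketched as a combined objective: a pixel-level fidelity term plus a weighted detection term from an untouched downstream model. Everything here (the surrogate detector loss, the weight `lam`) is a hypothetical stand-in for illustration, not the paper's actual loss:

```python
# Illustrative sketch of the training signal: the translator is optimized so
# that a FROZEN downstream detector scores well on its output, alongside a
# pixel-level fidelity term. Names and weights are assumptions.
import numpy as np

def l1_loss(a, b):
    return np.abs(a - b).mean()

def frozen_detector_loss(image, boxes_gt):
    # Stand-in for a pretrained detector's loss (e.g. YOLOv8); a deterministic
    # surrogate so the combined objective below is concrete.
    score = image.mean()  # pretend confidence proxy
    return float((1.0 - score) ** 2 + 0.0 * len(boxes_gt))

def total_loss(translated, visible_ref, boxes_gt, lam=0.5):
    # In practice only the translator's parameters receive gradients;
    # the downstream detector stays untouched.
    return l1_loss(translated, visible_ref) + lam * frozen_detector_loss(translated, boxes_gt)

fused = np.full((8, 8), 0.4)     # toy "translated" image
visible = np.full((8, 8), 0.5)   # toy visible-domain reference
print(round(total_loss(fused, visible, boxes_gt=[(0, 0, 4, 4)]), 3))  # → 0.28
```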
Yushen Xu
Foshan University
Image Fusion, Computer Vision
Xiaosong Li
School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China; Guangdong-HongKong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, Foshan, China
Zhenyu Kuang
Guangdong-HongKong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, Foshan, China
Xiaoqi Cheng
Guangdong-HongKong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, Foshan, China
Haishu Tan
School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
Huafeng Li
KUST
Computer Vision, Pattern Recognition, Machine Learning