MMIF-AMIN: Adaptive Loss-Driven Multi-Scale Invertible Dense Network for Multimodal Medical Image Fusion

📅 2025-08-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the challenge of jointly modeling the distinctive and the complementary information in multimodal medical image fusion. The authors propose an end-to-end invertible dense network framework: a Multi-scale Complementary Feature Extraction Module (MCFEM) decouples cross-modal features, a hybrid attention mechanism combined with a lightweight Transformer strengthens multi-scale contextual modeling, and an adaptive loss function steers the fusion process. Evaluated on multiple standard benchmarks, the method consistently outperforms nine state-of-the-art approaches in both quantitative metrics (e.g., SSIM, EN, SD) and qualitative visual fidelity. Ablation studies validate the efficacy of each component, while cross-task transfer experiments demonstrate strong generalizability. The core contribution is the synergistic representation and high-fidelity fusion of structural, textural, and functional information.
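The summary leaves the Invertible Dense Network (IDN) abstract, so here is a minimal PyTorch sketch of one plausible realization, assuming the IDN stacks affine coupling layers whose scale and shift subnetworks are small dense blocks. The class names, channel split, and layer counts are hypothetical; the paper's actual architecture may differ. What the sketch does show is why such a network is lossless: `inverse` recovers the input of `forward` exactly.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Small densely connected conv stack (hypothetical coupling subnetwork)."""
    def __init__(self, in_ch, out_ch, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1),
                nn.LeakyReLU(0.2, inplace=True)))
            ch += growth  # dense connectivity: each layer sees all earlier features
        self.out = nn.Conv2d(ch, out_ch, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return self.out(torch.cat(feats, dim=1))

class InvertibleCoupling(nn.Module):
    """Affine coupling layer: exactly invertible, so no feature information is lost."""
    def __init__(self, channels):
        super().__init__()
        self.half = channels // 2
        self.scale = DenseBlock(self.half, channels - self.half)
        self.shift = DenseBlock(self.half, channels - self.half)

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        y2 = x2 * torch.exp(torch.tanh(self.scale(x1))) + self.shift(x1)
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y):
        # exact inverse of forward: undo shift, then undo scale
        y1, y2 = y[:, :self.half], y[:, self.half:]
        x2 = (y2 - self.shift(y1)) * torch.exp(-torch.tanh(self.scale(y1)))
        return torch.cat([y1, x2], dim=1)
```

The `tanh` bounds the learned scale, which keeps both the forward and inverse mappings numerically stable.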

📝 Abstract
Multimodal medical image fusion (MMIF) aims to integrate images from different modalities into a single comprehensive image that enhances medical diagnosis by accurately depicting organ structures, tissue textures, and metabolic information. Simultaneously capturing the unique and the complementary information across modalities is a key research challenge in MMIF. To address it, this paper proposes a novel image fusion method, MMIF-AMIN, whose architecture can effectively extract both kinds of features. Specifically, an Invertible Dense Network (IDN) performs lossless feature extraction from each individual modality. To extract complementary information between modalities, a Multi-scale Complementary Feature Extraction Module (MCFEM) is designed, incorporating a hybrid attention mechanism, convolutional layers of varying kernel sizes, and Transformers. An adaptive loss function guides model learning, addressing the limitations of traditional manually designed loss functions and deepening the mining of the data. Extensive experiments demonstrate that MMIF-AMIN outperforms nine state-of-the-art MMIF methods, delivering superior results in both quantitative and qualitative analyses. Ablation experiments confirm the effectiveness of each component, and MMIF-AMIN also achieves promising performance when extended to other image fusion tasks.
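The abstract names the MCFEM's ingredients (hybrid attention, convolutions of varying kernel sizes, Transformers) without a concrete layout. The sketch below is one hedged interpretation: parallel 3/5/7 convolutions supply multi-scale local context, a CBAM-style channel-then-spatial attention stands in for the hybrid attention, and a single Transformer encoder layer over spatial tokens adds global context. Every design choice here (kernel sizes, attention form, head count) is an assumption, not the paper's verified configuration.

```python
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    """Channel attention followed by spatial attention (CBAM-style; assumed design)."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel(x)                # reweight channels
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial(s)             # reweight spatial locations

class MCFEM(nn.Module):
    """Fuses two modalities via parallel 3/5/7 convs, hybrid attention, and a Transformer."""
    def __init__(self, ch, heads=4):
        super().__init__()
        # note: d_model = 3 * ch must be divisible by heads (e.g., ch=16, heads=4)
        self.branches = nn.ModuleList(
            [nn.Conv2d(2 * ch, ch, k, padding=k // 2) for k in (3, 5, 7)])
        self.attn = HybridAttention(3 * ch)
        self.transformer = nn.TransformerEncoderLayer(
            d_model=3 * ch, nhead=heads, dim_feedforward=6 * ch, batch_first=True)
        self.out = nn.Conv2d(3 * ch, ch, 1)

    def forward(self, fa, fb):
        x = torch.cat([fa, fb], dim=1)                     # stack both modalities
        m = torch.cat([b(x) for b in self.branches], 1)    # multi-scale local context
        m = self.attn(m)                                   # emphasize complementary responses
        b, c, h, w = m.shape
        t = self.transformer(m.flatten(2).transpose(1, 2)) # global context over tokens
        return self.out(t.transpose(1, 2).reshape(b, c, h, w))
```

Usage under these assumptions: `MCFEM(ch=16)(feat_a, feat_b)` with two per-modality feature maps of shape `(B, 16, H, W)` returns a fused map of the same shape.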
Problem

Research questions and friction points this paper is trying to address.

Integrate multimodal medical images for enhanced diagnosis
Extract unique and complementary features across modalities
Overcome limitations of traditional loss functions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Invertible Dense Network for lossless feature extraction
Multi-scale Complementary Feature Extraction Module
Adaptive loss function for enhanced data mining (sketched below)
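The bullets above do not specify the adaptive loss, so the sketch below illustrates one standard way to make a fusion loss self-balancing: homoscedastic uncertainty weighting (Kendall et al.), which learns the trade-off between an intensity term and a gradient term rather than hand-tuning it. Both the choice of terms (element-wise maximum over the two sources, a common saliency heuristic in fusion losses) and the weighting scheme are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gradient(x):
    """Image gradient magnitude via finite differences, padded back to input size."""
    dx = x[..., :, 1:] - x[..., :, :-1]
    dy = x[..., 1:, :] - x[..., :-1, :]
    return F.pad(dx.abs(), (0, 1, 0, 0)) + F.pad(dy.abs(), (0, 0, 0, 1))

class AdaptiveFusionLoss(nn.Module):
    """Balances intensity and gradient terms with learned log-variances
    (uncertainty weighting; an illustration, not the paper's exact loss)."""
    def __init__(self, n_terms=2):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(n_terms))

    def forward(self, fused, src_a, src_b):
        # intensity term: keep the stronger response of the two sources
        l_int = F.l1_loss(fused, torch.maximum(src_a, src_b))
        # gradient term: preserve the sharper edges of either modality
        l_grad = F.l1_loss(gradient(fused),
                           torch.maximum(gradient(src_a), gradient(src_b)))
        terms = torch.stack([l_int, l_grad])
        # each term is scaled by exp(-log_var) and regularized by +log_var,
        # so the balance between terms is learned instead of hand-tuned
        return (terms * torch.exp(-self.log_vars) + self.log_vars).sum()
```

As a `log_var` grows, its term is down-weighted but the additive `log_var` penalty grows, so the optimizer cannot trivially ignore any term.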
Tao Luo
College of Artificial Intelligence, Southwest University, Chongqing, 400715, P.R. China
Weihua Xu
Southwest University
Granular computing · Artificial intelligence · Cognitive computing · Data mining · Knowledge discovery