AMM-Diff: Adaptive Multi-Modality Diffusion Network for Missing Modality Imputation

πŸ“… 2025-01-22
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Clinical MR multi-modal neuroimaging often suffers from missing modalities, hindering accurate brain tumor diagnosis and segmentation. To address this, we propose an adaptive multi-modal diffusion generative model. Our method introduces a novel dynamic conditional generation architecture capable of handling arbitrary input modality combinations. We further design an Image-Frequency Fusion Network (IFFN) that learns unified representations via self-supervised frequency-domain pretext tasks and explicitly models high-frequency Fourier components to enhance structural fidelity. Evaluated on the BraTS 2021 dataset, our approach significantly outperforms state-of-the-art single- and dual-modal completion methods: it achieves +3.2 dB PSNR, +0.08 SSIM, and improves downstream tumor segmentation Dice score by 2.7%. This work establishes a new paradigm for robust and precise brain tumor analysis under modality absence.

Technology Category

Application Category

πŸ“ Abstract
In clinical practice, full imaging is not always feasible, often due to complex acquisition protocols, stringent privacy regulations, or specific clinical needs. However, missing MR modalities pose significant challenges for tasks like brain tumor segmentation, especially in deep learning-based segmentation, as each modality provides complementary information crucial for improving accuracy. A promising solution is missing data imputation, where absent modalities are generated from available ones. While generative models have been widely used for this purpose, most state-of-the-art approaches are limited to single or dual target translations, lacking the adaptability to generate missing modalities based on varying input configurations. To address this, we propose an Adaptive Multi-Modality Diffusion Network (AMM-Diff), a novel diffusion-based generative model capable of handling any number of input modalities and generating the missing ones. We designed an Image-Frequency Fusion Network (IFFN) that learns a unified feature representation through a self-supervised pretext task across the full input modalities and their selected high-frequency Fourier components. The proposed diffusion model leverages this representation, encapsulating prior knowledge of the complete modalities, and combines it with an adaptive reconstruction strategy to achieve missing modality completion. Experimental results on the BraTS 2021 dataset demonstrate the effectiveness of our approach.
Problem

Research questions and friction points this paper is trying to address.

Brain Tumor Diagnosis
Image Reconstruction
Multi-modal Imaging
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Multimodal Diffusion Network
Flexible Missing Image Handling
Brain Tumor Recognition
πŸ”Ž Similar Papers
No similar papers found.
A
Aghiles Kebaili
Quantif, University of Rouen-Normandy, Rouen, 76183, France
J
J. Lapuyade-Lahorgue
Quantif, University of Rouen-Normandy, Rouen, 76183, France
P
Pierre Vera
Quantif, University of Rouen-Normandy, Rouen, 76183, France; CLCC Henri Becquerel, Rouen, 76038, France
Su Ruan
Su Ruan
UniversitΓ© de Rouen Normandie, France
data fusionmedical image analysis and processingmachine learning