MedCoDi-M: A Multi-Prompt Foundation Model for Multimodal Medical Data Generation

📅 2025-01-08
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the challenges of multimodal fusion and weak diagnostic support in clinical AI deployment—stemming from data scarcity and privacy constraints—this work introduces MedCoDi-M, a 6.77-billion-parameter multimodal foundation model tailored for radiological practice. The authors propose a novel Multi-Prompt training paradigm that establishes a unified, contrastive-learning-driven latent space, enabling synchronous cross-modal generation and semantic alignment between X-ray images and radiology reports. Technically, the model integrates Latent Diffusion Models (LDMs), large-scale medical multimodal pretraining, and conditional control mechanisms. On MIMIC-CXR, it significantly outperforms five state-of-the-art baselines, and its clinical plausibility is validated via a radiologist-conducted Visual Turing Test. Moreover, the model demonstrates robust generalization under challenging real-world conditions—including data anonymization, few-shot learning, and severe class imbalance—highlighting its practical viability for clinical deployment.
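The summary above says the shared latent space is built with contrastive learning on paired X-rays and reports. The paper's exact objective is not given here, so the following is only a minimal CLIP-style sketch of that idea: a symmetric InfoNCE loss that pulls matched image/report embeddings together and pushes mismatched pairs apart. The function name and shapes are illustrative, not MedCoDi-M's actual implementation.

```python
import numpy as np

def contrastive_alignment_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric contrastive (CLIP-style) loss over a batch of paired
    image and report embeddings, each of shape (B, D). Matched pairs
    sit on the diagonal of the similarity matrix; minimizing this loss
    shapes a shared latent space where the modalities align."""
    # L2-normalize so dot products become cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (B, B) similarity matrix
    idx = np.arange(logits.shape[0])            # i-th image matches i-th report

    def xent(l):
        # cross-entropy with the diagonal entries as targets
        l = l - l.max(axis=1, keepdims=True)    # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[idx, idx].mean()

    # average the image->text and text->image directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

With perfectly paired embeddings the diagonal dominates and the loss approaches zero; with unrelated pairs it stays near log(B), which is what drives the two encoders toward a common space.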

📝 Abstract
Artificial Intelligence is revolutionizing medical practice, enhancing diagnostic accuracy and healthcare delivery. However, its adoption in medical settings still faces significant challenges related to data availability and privacy constraints. Synthetic data has emerged as a promising solution to mitigate these issues, addressing data scarcity while preserving privacy. Recently, Latent Diffusion Models have emerged as a powerful tool for generating high-quality synthetic data. Meanwhile, the integration of different modalities has gained interest, emphasizing the need for models capable of handling multimodal medical data. Existing approaches struggle to integrate complementary information and lack the ability to generate modalities simultaneously. To address this challenge, we present MedCoDi-M, a 6.77-billion-parameter model designed for multimodal medical data generation that, following the Foundation Model paradigm, exploits contrastive learning and large quantities of data to build a shared latent space which captures the relationships between different data modalities. Further, we introduce the Multi-Prompt training technique, which significantly boosts MedCoDi-M's generation under different settings. We extensively validate MedCoDi-M: first we benchmark it against five competitors on the MIMIC-CXR dataset, a state-of-the-art dataset for Chest X-ray and radiological report generation. Secondly, we perform a Visual Turing Test with expert radiologists to assess the realism and clinical relevance of the generated data, ensuring alignment with real-world scenarios. Finally, we assess the utility of MedCoDi-M in addressing key challenges in the medical field, such as anonymization, data scarcity, and imbalanced learning. The results are promising, demonstrating the applicability of MedCoDi-M in medical contexts. Project page is at https://cosbidev.github.io/MedCoDi-M/.
Problem

Research questions and friction points this paper is trying to address.

Medical AI
Data scarcity
Privacy protection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Medical Data Generation
Contrastive Learning
Privacy-preserving Diversity
Daniele Molino
PhD in Artificial Intelligence, Università Campus Bio-Medico di Roma
Artificial Intelligence, Generative Models
Francesco Di Feola
Umeå University
AI, machine learning, deep learning, computer vision
E. Faiella
Department of Radiology and Interventional Radiology, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy; Research Unit of Radiology and Interventional Radiology, Department of Medicine and Surgery, UniversitĂ  Campus Bio-Medico di Roma, Rome, Italy
Deborah Fazzini
Department of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano S.p.A., Milano, Italy
D. Santucci
Department of Radiology and Interventional Radiology, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy
Linlin Shen
Shenzhen University
Deep Learning, Computer Vision, Facial Analysis/Recognition, Medical Image Analysis
V. Guarrasi
Research Unit of Computer Systems and Bioinformatics, Department of Engineering, Università Campus Bio-Medico di Roma, Roma, Italy
P. Soda
Research Unit of Computer Systems and Bioinformatics, Department of Engineering, Università Campus Bio-Medico di Roma, Roma, Italy; Department of Diagnostics and Intervention, Radiation Physics, Biomedical Engineering, Umeå University, Umeå, Sweden