Foundation Models in Medical Image Analysis: A Systematic Review and Meta-Analysis

📅 2025-10-19
🤖 AI Summary
Foundation models in medical image analysis lack a unified framework, hindering systematic understanding of architectural evolution, training paradigms, and clinical translation. Method: We conduct the first structured taxonomy and cross-modal meta-analysis of vision and vision-language foundation models, synthesizing more than 120 studies to quantitatively characterize trends in multimodal fusion, few-shot adaptation, and clinical deployment. We propose pathways forward—including federated learning for privacy-preserving training, knowledge distillation for model compression, and prompt engineering for zero- and few-shot generalization—and systematically evaluate domain adaptation, efficient fine-tuning, interpretability, and prompting strategies for clinical applicability. Contribution/Results: Foundation models significantly outperform conventional methods in zero- and few-shot settings; data usage increasingly favors multi-center, small-scale cohorts; and applications concentrate on lesion segmentation and diagnostic support. This work establishes a theoretical foundation and practical roadmap for standardizing and clinically deploying medical foundation models.

📝 Abstract
Recent advancements in artificial intelligence (AI), particularly foundation models (FMs), have revolutionized medical image analysis, demonstrating strong zero- and few-shot performance across diverse medical imaging tasks, from segmentation to report generation. Unlike traditional task-specific AI models, FMs leverage large corpora of labeled and unlabeled multimodal datasets to learn generalized representations that can be adapted to various downstream clinical applications with minimal fine-tuning. However, despite the rapid proliferation of FM research in medical imaging, the field remains fragmented, lacking a unified synthesis that systematically maps the evolution of architectures, training paradigms, and clinical applications across modalities. To address this gap, this review article provides a comprehensive and structured analysis of FMs in medical image analysis. We systematically categorize studies into vision-only and vision-language FMs based on their architectural foundations, training strategies, and downstream clinical tasks. Additionally, a quantitative meta-analysis of the studies was conducted to characterize temporal trends in dataset utilization and application domains. We also critically discuss persistent challenges, including domain adaptation, efficient fine-tuning, computational constraints, and interpretability along with emerging solutions such as federated learning, knowledge distillation, and advanced prompting. Finally, we identify key future research directions aimed at enhancing the robustness, explainability, and clinical integration of FMs, thereby accelerating their translation into real-world medical practice.
Problem

Research questions and friction points this paper is trying to address.

Systematically reviewing foundation models in medical imaging
Analyzing architectural evolution and clinical applications across modalities
Addressing challenges in domain adaptation and computational constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages large multimodal datasets for generalized representations
Systematically categorizes vision-only and vision-language foundation models
Conducts quantitative meta-analysis on dataset utilization trends
Praveenbalaji Rajendran
Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
Mojtaba Safari
Postdoctoral Fellow, Emory University; Medical Physics, MRI, Medical Image Analysis
Wenfeng He
Department of Computer Science and Informatics, Emory University, Atlanta, GA 30322
Mingzhe Hu
Department of Computer Science and Informatics, Emory University, Atlanta, GA 30322
Shansong Wang
Postdoctoral Research Fellow at Emory University; computer vision, multimodal learning, foundation models
Jun Zhou
Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322
Xiaofeng Yang
Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322; Department of Computer Science and Informatics, Emory University, Atlanta, GA 30322