Advancements in Medical Image Classification through Fine-Tuning Natural Domain Foundation Models

📅 2025-05-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study systematically evaluates the transferability of natural-image foundation models—including DINOv2, MAE, VMamba, CoCa, SAM2, and AIMv2—to medical image classification across four low-resource benchmarks: breast (CBIS-DDSM), skin (ISIC2019), fundus (APTOS2019), and chest X-ray (CheXpert). Using linear probing, full fine-tuning, and prompt tuning within PyTorch, we implement lightweight domain adaptation without medical pretraining. Our key contribution is the first empirical demonstration that state-of-the-art vision foundation models pretrained solely on natural images significantly improve few-shot medical classification performance. AIMv2, DINOv2, and SAM2 achieve new SOTA results across most tasks, yielding average accuracy gains of 3.2–7.8% over prior baselines. These findings reveal the strong generalization capacity of generic visual foundation models in label-scarce medical settings, establishing a novel paradigm for deploying high-performance AI in resource-constrained clinical environments.

Technology Category

Application Category

📝 Abstract
Using massive datasets, foundation models are large-scale, pre-trained models that perform a wide range of tasks. These models have shown consistently improved results with the introduction of new methods. It is crucial to analyze how these trends impact the medical field and determine whether these advancements can drive meaningful change. This study investigates the application of recent state-of-the-art foundation models, DINOv2, MAE, VMamba, CoCa, SAM2, and AIMv2, for medical image classification. We explore their effectiveness on datasets including CBIS-DDSM for mammography, ISIC2019 for skin lesions, APTOS2019 for diabetic retinopathy, and CHEXPERT for chest radiographs. By fine-tuning these models and evaluating their configurations, we aim to understand the potential of these advancements in medical image classification. The results indicate that these advanced models significantly enhance classification outcomes, demonstrating robust performance despite limited labeled data. Based on our results, AIMv2, DINOv2, and SAM2 models outperformed others, demonstrating that progress in natural domain training has positively impacted the medical domain and improved classification outcomes. Our code is publicly available at: https://github.com/sajjad-sh33/Medical-Transfer-Learning.
Problem

Research questions and friction points this paper is trying to address.

Evaluating foundation models for medical image classification
Assessing performance on diverse medical datasets
Determining impact of natural domain training on medical tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fine-tuning natural domain foundation models
Evaluating models on diverse medical datasets
Achieving robust performance with limited data
🔎 Similar Papers
M
Mobina Mansoori
Intelligent Signal & Information Processing (I-SIP) Lab, Concordia University, Canada
S
Sajjad Shahabodini
Intelligent Signal & Information Processing (I-SIP) Lab, Concordia University, Canada
Farnoush Bayatmakou
Farnoush Bayatmakou
Concordia University
Machine LearningMedical Image ProcessingText Mining
J
J. Abouei
Department of Electrical Engineering, Yazd University, Iran
K
Konstantinos N. Plataniotis
Department of Electrical and Computer Engineering, University of Toronto
A
Arash Mohammadi
Intelligent Signal & Information Processing (I-SIP) Lab, Concordia University, Canada