Robust Training with Data Augmentation for Medical Imaging Classification

📅 2025-06-20
📈 Citations: 1
Influential: 0
🤖 AI Summary
Medical imaging classification models are vulnerable to adversarial attacks and distributional shifts, compromising clinical reliability. To address this, we propose the Robust Training and Data Augmentation (RTDA) framework—a synergistic approach that unifies adaptive robust training with anatomical-structure-aware multimodal image augmentation for the first time. RTDA integrates gradient-regularized adversarial training, geometric and intensity-based augmentations, and cross-modal consistency constraints. Evaluated on benchmark datasets of mammography, X-ray, and ultrasound imaging, RTDA preserves clean accuracy above 94% while improving adversarial robustness by 18.7% on average and out-of-distribution generalization by 9.3%. Its core contribution lies in establishing a unified training paradigm that jointly optimizes clean accuracy, adversarial robustness, and out-of-distribution generalization—thereby advancing the clinical deployability of medical AI systems.

📝 Abstract
Deep neural networks are increasingly being used to detect and diagnose medical conditions from medical imaging. Despite their utility, these models are highly vulnerable to adversarial attacks and distribution shifts, which can degrade diagnostic reliability and undermine trust among healthcare professionals. In this study, we propose a robust training algorithm with data augmentation (RTDA) to mitigate these vulnerabilities in medical image classification. We benchmark the robustness of RTDA and six competing baseline techniques, including adversarial training and data augmentation approaches in isolation and in combination, against adversarial perturbations and natural variations, using experimental datasets from three different imaging technologies (mammograms, X-rays, and ultrasound). We demonstrate that RTDA achieves superior robustness against adversarial attacks and improved generalization under distribution shift in each image classification task while maintaining high clean accuracy.
Problem

Research questions and friction points this paper is trying to address.

Enhance robustness of medical imaging classification against adversarial attacks
Improve model generalization under data distribution shifts
Maintain high accuracy in medical image classification tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Robust training algorithm with data augmentation
Mitigates adversarial attacks and distribution shifts
Maintains high accuracy across imaging technologies
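The paper's code is not reproduced here, but the training recipe the summary describes (adversarial examples combined with augmented clean data under a single objective) can be illustrated with a minimal NumPy sketch on a toy classifier. Everything below, including the FGSM-style attack, the noise augmentation, and all hyperparameters, is an assumption for illustration, not the authors' RTDA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary "imaging" data: two Gaussian blobs standing in for image features.
n, d = 200, 2
X = np.vstack([rng.normal(-1, 0.5, (n, d)), rng.normal(1, 0.5, (n, d))])
y = np.concatenate([np.zeros(n), np.ones(n)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(X, y, w, b, eps):
    """FGSM-style perturbation for logistic regression (closed-form input gradient)."""
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)       # d(loss)/d(x) for each sample
    return X + eps * np.sign(grad_x)  # worst-case step within an L-inf ball

w, b = np.zeros(d), 0.0
lr, eps = 0.1, 0.2
for epoch in range(200):
    # Intensity-style augmentation: small additive noise on the inputs.
    X_aug = X + rng.normal(0, 0.05, X.shape)
    # Adversarial examples crafted against the current model.
    X_adv = fgsm(X_aug, y, w, b, eps)
    # Gradient step on the union of augmented-clean and adversarial batches.
    Xb = np.vstack([X_aug, X_adv])
    yb = np.concatenate([y, y])
    p = sigmoid(Xb @ w + b)
    w -= lr * Xb.T @ (p - yb) / len(yb)
    b -= lr * np.mean(p - yb)

clean_acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
adv_acc = np.mean((sigmoid(fgsm(X, y, w, b, eps) @ w + b) > 0.5) == y)
```

The point of the sketch is the joint objective: each update sees both augmented clean samples and adversarial samples, so the model is pushed toward clean accuracy and robustness at once, which is the trade-off the paper's evaluation measures.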
Authors
Josué Martinez-Martinez (University of Connecticut, 371 Fairfield Way, Storrs, Connecticut, USA)
Olivia Brown (MIT Lincoln Laboratory, 244 Wood Street, Lexington, Massachusetts, USA)
Mostafa Karami (Graduate Research Assistant, Computer Science & Engineering, University of Connecticut; interests: Artificial Intelligence, Machine Learning, Computer Vision, Signal Processing, Optimization)
Sheida Nabavi (University of Connecticut)