🤖 AI Summary
In federated learning (FL) for brain tumor MRI diagnosis, inference-stage models are vulnerable to stealthy adversarial attacks, compromising clinical reliability.
Method: This paper proposes MedFedPure, a privacy-preserving personalized defense framework that jointly leverages Masked Autoencoder (MAE)-based anomaly detection and adaptive diffusion-based purification, enabling localized, real-time attack identification and selective denoising without sharing raw data. The approach integrates personalized federated learning, MAE-driven anomaly localization, and a lightweight diffusion denoising module.
Contribution/Results: Evaluated on the Br35H dataset, the framework raises accuracy under strong adversarial attacks from 49.50% to 87.33% while preserving 97.67% accuracy on clean samples. It thus significantly enhances robustness, privacy preservation, and diagnostic fidelity, advancing the safety and trustworthiness of clinical AI systems.
📝 Abstract
Artificial intelligence (AI) has shown great potential in medical imaging, particularly for brain tumor detection using Magnetic Resonance Imaging (MRI). However, models trained collaboratively through Federated Learning (FL), an approach adopted to protect patient privacy, remain vulnerable at inference time. Adversarial attacks can subtly alter medical scans in ways invisible to the human eye yet powerful enough to mislead AI models, potentially causing serious misdiagnoses. Existing defenses often assume centralized data and struggle to cope with the decentralized and diverse nature of federated medical settings. In this work, we present MedFedPure, a personalized federated learning defense framework designed to protect diagnostic AI models at inference time without compromising privacy or accuracy. MedFedPure combines three key elements: (1) a personalized FL model that adapts to the unique data distribution of each institution; (2) a Masked Autoencoder (MAE) that detects suspicious inputs by exposing hidden perturbations; and (3) an adaptive diffusion-based purification module that selectively cleans only the flagged scans before classification. Together, these steps offer robust protection while preserving the integrity of clean, unperturbed images. We evaluated MedFedPure on the Br35H brain MRI dataset. The results show a significant gain in adversarial robustness, improving accuracy from 49.50% to 87.33% under strong attacks, while maintaining a high clean accuracy of 97.67%. By operating locally and in real time during diagnosis, our framework provides a practical path to deploying secure, trustworthy, and privacy-preserving AI tools in clinical workflows.
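The detect-then-purify inference pipeline described above can be sketched in a few lines. This is a minimal illustration with toy stand-ins, not the paper's implementation: a random-mask reconstruction error stands in for a trained MAE, and iterative noise-plus-averaging stands in for a trained diffusion denoiser. All function names and the threshold value are hypothetical.

```python
import numpy as np

def mae_anomaly_score(img, mask_ratio=0.75, rng=None):
    """Toy stand-in for MAE-based detection: mask random pixels,
    'reconstruct' them with the mean of the visible pixels, and
    measure the reconstruction error. A trained MAE reconstructs
    masked patches of clean scans far more faithfully than those
    of adversarially perturbed ones, exposing hidden perturbations."""
    if rng is None:
        rng = np.random.default_rng(0)
    mask = rng.random(img.shape) < mask_ratio
    recon = img.copy()
    recon[mask] = img[~mask].mean()  # crude reconstruction of masked pixels
    return float(np.abs(img[mask] - recon[mask]).mean())

def diffusion_purify(img, steps=3):
    """Toy stand-in for diffusion purification: add mild noise, then
    denoise by repeated local averaging. A real module would run a few
    reverse steps of a trained diffusion denoiser."""
    rng = np.random.default_rng(1)
    x = img + rng.normal(0.0, 0.05, img.shape)
    for _ in range(steps):
        x = 0.25 * (np.roll(x, 1, 0) + np.roll(x, -1, 0)
                    + np.roll(x, 1, 1) + np.roll(x, -1, 1))
    return x

def defend_and_classify(img, classifier, threshold=0.1):
    """Selective purification: only inputs flagged as anomalous are
    denoised, so clean scans pass through untouched and clean
    accuracy is preserved."""
    if mae_anomaly_score(img) > threshold:
        img = diffusion_purify(img)
    return classifier(img)
```

Here `classifier` is any callable mapping an image array to a label; the key design point is that purification runs locally at inference time, so no raw scans leave the institution.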
Index Terms: cancer, tumor detection, federated learning, masked autoencoder, diffusion, privacy