Detecting and Mitigating Adversarial Attacks on Deep Learning-Based MRI Reconstruction Without Any Retraining

📅 2025-01-03
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Deep learning–based MRI reconstruction models are vulnerable to adversarial attacks, and existing defenses typically require model retraining. To address this, we propose an online detection-and-mitigation framework that requires no retraining. Our method leverages a physics-driven cyclic measurement consistency criterion for automatic adversarial detection and formulates a robust input-domain optimization objective under local spherical constraints to suppress k-space perturbations. Key contributions include: (i) the first retraining-free adversarial detection-and-mitigation paradigm; (ii) an interpretable cyclic consistency verification mechanism grounded in MRI physics; and (iii) a localized optimization strategy balancing stability and reconstruction fidelity. Extensive experiments across multiple datasets, diverse attack types and intensities, and various physics-driven deep learning (PD-DL) architectures demonstrate significant improvements in PSNR and SSIM over state-of-the-art retraining-based methods, with superior visual reconstruction quality.
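As a rough illustration of the cyclic measurement consistency idea, the sketch below assumes a simplified single-coil Cartesian setup: `recon_fn` stands in for the trained PD-DL reconstruction model, `mask1`/`mask2` are two different sub-sampling masks, and the decision threshold `tau` is a hypothetical tuning parameter, not a value from the paper.

```python
import numpy as np

def cyclic_consistency_score(y, mask1, mask2, recon_fn):
    """Score how much a re-synthesized reconstruction deviates from the first one.

    y        : undersampled k-space measurements observed with mask1 (possibly attacked)
    mask1    : original sub-sampling mask (boolean, same shape as the image)
    mask2    : a different sub-sampling mask
    recon_fn : trained PD-DL model, mapping (k-space, mask) -> complex image
    """
    # First reconstruction from the observed measurements.
    x1 = recon_fn(y, mask1)

    # Map that output to synthetic measurements for a different sub-sampling pattern.
    y2 = mask2 * np.fft.fft2(x1)

    # Reconstruct the synthesized measurements with the same model.
    x2 = recon_fn(y2, mask2)

    # Clean inputs should give consistent reconstructions; attacks disrupt this.
    return np.linalg.norm(x2 - x1) / np.linalg.norm(x1)

# Hypothetical usage: flag the input as adversarial if the score exceeds a threshold.
# tau = 0.1  # illustrative value, not from the paper
# attacked = cyclic_consistency_score(y, mask1, mask2, recon_fn) > tau
```

An input whose score exceeds the chosen threshold would be flagged as attacked and handed to the mitigation step.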

📝 Abstract
Deep learning (DL) methods, especially those based on physics-driven DL, have become the state-of-the-art for reconstructing sub-sampled magnetic resonance imaging (MRI) data. However, studies have shown that these methods are susceptible to small adversarial input perturbations, or attacks, resulting in major distortions in the output images. Various strategies have been proposed to reduce the effects of these attacks, but they require retraining and may lower reconstruction quality for non-perturbed/clean inputs. In this work, we propose a novel approach for detecting and mitigating adversarial attacks on MRI reconstruction models without any retraining. Our detection strategy is based on the idea of cyclic measurement consistency. The output of the model is mapped to another set of MRI measurements for a different sub-sampling pattern, and this synthesized data is reconstructed with the same model. Intuitively, without an attack, the second reconstruction is expected to be consistent with the first, while with an attack, disruptions are present. Subsequently, this idea is extended to devise a novel objective function, which is minimized within a small ball around the attack input for mitigation. Experimental results show that our method substantially reduces the impact of adversarial perturbations across different datasets, attack types/strengths and PD-DL networks, and qualitatively and quantitatively outperforms conventional mitigation methods that involve retraining.
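The mitigation step described in the abstract, minimizing a cyclic-consistency objective within a small ball around the attacked input, could be sketched as projected gradient descent along the following lines. Here `objective_grad` (e.g., obtained by automatic differentiation through the model), the step size, the iteration count, and the ball radius are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def mitigate(y_attacked, objective_grad, radius=0.05, step=0.01, n_iters=50):
    """Projected gradient descent within a small ball around the suspected input.

    y_attacked     : suspected adversarial k-space input
    objective_grad : callable returning the gradient of the cyclic-consistency
                     objective with respect to the input (e.g., via autodiff)
    radius         : ball radius, relative to the norm of the attacked input
    """
    y = y_attacked.copy()
    eps = radius * np.linalg.norm(y_attacked)
    for _ in range(n_iters):
        # Descend on the cyclic-consistency objective.
        y = y - step * objective_grad(y)
        # Project back onto the ball of radius eps centered at the attacked input,
        # so the corrected input stays close to the measured data.
        delta = y - y_attacked
        norm = np.linalg.norm(delta)
        if norm > eps:
            y = y_attacked + delta * (eps / norm)
    return y
```

The reconstruction of the mitigated input would then replace the reconstruction obtained from the attacked measurements.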
Problem

Research questions and friction points this paper is trying to address.

Deep Learning
MRI Image Reconstruction
Adversarial Attacks Mitigation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial Attack Mitigation
MRI Image Reconstruction
Cyclic Measurement Consistency
Mahdi Saberi
Ph.D. Candidate, University of Minnesota
Deep Learning, Inverse Problems, Adversarial Attacks
Chi Zhang
Department of Electrical and Computer Engineering, University of Minnesota; Center for Magnetic Resonance Research, University of Minnesota; Department of Radiology, Stanford University
Mehmet Akcakaya
Jim and Sara Anderson Chair, Professor, University of Minnesota
artificial intelligence, MRI, computational imaging, inverse problems, image processing