🤖 AI Summary
This study presents the first systematic evaluation of the robustness of state-space models—specifically Mamba—against combined software and hardware threats in medical imaging tasks. Using the MedMNIST benchmark collection, the authors simulate diverse input distortions through FGSM and PGD adversarial attacks, PatchDrop occlusions, Gaussian noise, and defocus blur, while modeling hardware faults via targeted and random bit-flip injections. Experimental results reveal a significant drop in Mamba's accuracy under these perturbations, exposing critical vulnerabilities that could compromise its reliability in clinical deployment. The findings underscore the fragility of current models in real-world medical settings and emphasize the urgent need for robustness-enhancing mechanisms tailored to the demands of safe and trustworthy medical AI.
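The white-box attacks mentioned above follow the standard gradient-sign recipe. As a minimal sketch of FGSM (on a toy logistic-regression model with an analytic gradient, not the paper's Mamba classifier), the adversarial input is the clean input shifted by `eps` in the direction of the sign of the loss gradient:

```python
import numpy as np

def fgsm_perturb(x, w, y, eps):
    """FGSM sketch: x_adv = x + eps * sign(dL/dx).
    L is binary cross-entropy of sigmoid(w.x) against label y in {0, 1};
    for this pairing the input gradient is analytically (p - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(w @ x)))  # model prediction
    grad_x = (p - y) * w                # dL/dx for BCE + sigmoid
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
x = rng.normal(size=8)   # stand-in for a flattened image
w = rng.normal(size=8)   # stand-in for model weights
x_adv = fgsm_perturb(x, w, y=1.0, eps=0.03)
# Each component moves by at most eps, so the perturbation is
# imperceptible in pixel space yet can flip the prediction.
```

PGD, the stronger attack in the paper, iterates this step several times with a projection back into the `eps`-ball around the clean input.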
📝 Abstract
State-space models like Mamba offer linear-time sequence processing and a low memory footprint, making them attractive for medical imaging. However, their robustness under realistic software and hardware threat models remains underexplored. This paper evaluates Mamba on multiple MedMNIST classification benchmarks under input-level attacks—white-box adversarial perturbations (FGSM/PGD), occlusion-based PatchDrop, and common acquisition corruptions (Gaussian noise and defocus blur)—as well as hardware-inspired fault attacks emulated in software via targeted and random bit-flip injections into weights and activations. We profile the resulting vulnerabilities and quantify the impact on accuracy, indicating that dedicated defenses are needed before deployment.
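The bit-flip faults described in the abstract can be emulated in software by reinterpreting a float32 weight as its 32-bit integer pattern and XOR-ing a single bit. A minimal sketch (the weight array here is a hypothetical stand-in for a Mamba parameter tensor):

```python
import numpy as np

def flip_bit(weights, index, bit):
    """Flip one bit of weights.flat[index] (float32) by XOR-ing
    its raw uint32 representation. Returns a faulty copy."""
    w = weights.astype(np.float32).copy()
    raw = w.view(np.uint32)          # same bytes, integer view
    raw.flat[index] ^= np.uint32(1 << bit)
    return w

w = np.ones(4, dtype=np.float32)
# 1.0 is 0x3F800000; flipping exponent bit 30 yields 0x7F800000 = +inf,
# illustrating why targeted exponent flips are far more damaging than
# random mantissa flips.
w_faulty = flip_bit(w, index=0, bit=30)
```

Targeted injection chooses `index` and `bit` to maximize damage (e.g., high exponent bits of large-magnitude weights), while random injection samples both uniformly.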