🤖 AI Summary
This work addresses the weak adversarial robustness of the Segment Anything Model (SAM), the low transferability of attacks across prompt types, and the accuracy cost of existing defenses, proposing a unified framework for both evaluating and enhancing robustness. Methodologically: (1) a cross-prompt adversarial attack strategy that significantly improves transferability across diverse prompt types (points, boxes, and masks); (2) a lightweight, adaptive defense based on singular value decomposition (SVD) that fine-tunes only 512 parameters, the singular values of critical backbone layers, to balance robustness and segmentation accuracy. Experiments on SAM and SAM 2 show that the attack achieves higher success rates than prior methods, and that the defense improves mean Intersection-over-Union (mIoU) under various attacks by at least 15% while leaving clean-data segmentation accuracy nearly unchanged.
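The cross-prompt idea, optimizing a single perturbation against losses aggregated over several prompt types so that it transfers across prompts, can be sketched as a PGD-style loop. This is a minimal illustration only: `model` and `prompts` are hypothetical stand-ins, not SAM's actual API, and the paper's exact loss and attack schedule are not given in the abstract.

```python
import torch
import torch.nn.functional as F

def cross_prompt_attack(model, image, prompts, target_mask,
                        eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD-style sketch: one perturbation `delta` is ascended on a loss
    summed over several prompt types (points, boxes, masks), so no single
    prompt dominates and the attack transfers across prompts."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # Aggregate the segmentation loss over all prompt types.
        loss = sum(
            F.binary_cross_entropy_with_logits(model(image + delta, p), target_mask)
            for p in prompts
        )
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend the aggregated loss
            delta.clamp_(-eps, eps)             # stay inside the L-infinity budget
        delta.grad.zero_()
    return delta.detach()
```

With a dummy differentiable `model`, the returned `delta` stays within the `eps` ball while being nonzero, which is the basic contract of this kind of attack.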
📝 Abstract
The Segment Anything Model (SAM) is a widely used vision foundation model with diverse applications, including image segmentation, detection, and tracking. Given SAM's wide applications, understanding its robustness against adversarial attacks is crucial for real-world deployment. However, research on SAM's robustness is still in its early stages. Existing attacks often overlook the role of prompts in evaluating SAM's robustness, and defense methods that balance robustness and accuracy remain underexplored. To address these gaps, this paper proposes an adversarial robustness framework designed to evaluate and enhance the robustness of SAM. Specifically, we introduce a cross-prompt attack method to improve attack transferability across different prompt types. Beyond attacks, we propose a few-parameter adaptation strategy to defend SAM against various adversarial attacks. To balance robustness and accuracy, we use singular value decomposition (SVD) to constrain the space of trainable parameters, so that only singular values are adaptable. Experiments demonstrate that our cross-prompt attack method outperforms previous approaches in terms of attack success rate on both SAM and SAM 2. By adapting only 512 parameters, we achieve at least a 15% improvement in mean intersection over union (mIoU) against various adversarial attacks. Compared to previous defense methods, our approach enhances the robustness of SAM while largely preserving its original performance.
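The SVD-constrained adaptation described above can be sketched as follows. This is a minimal PyTorch illustration, assuming a single linear layer: the weight is decomposed once as W = U diag(s) Vᵀ, the singular vectors are frozen, and only the vector of singular values trains. The paper's exact choice of backbone layers and training objective are not specified in the abstract, so those details are omitted.

```python
import torch
import torch.nn as nn

class SVDAdaptedLinear(nn.Module):
    """Wraps a linear layer so that only its singular values are trainable."""
    def __init__(self, linear: nn.Linear):
        super().__init__()
        # One-time decomposition of the pretrained weight: W = U diag(S) Vh.
        U, S, Vh = torch.linalg.svd(linear.weight.data, full_matrices=False)
        self.register_buffer("U", U)      # frozen left singular vectors
        self.register_buffer("Vh", Vh)    # frozen right singular vectors
        self.s = nn.Parameter(S.clone())  # the only trainable parameters
        # Bias stays frozen (an assumption; the abstract does not say).
        self.register_buffer(
            "bias", linear.bias.data.clone() if linear.bias is not None else None
        )

    def forward(self, x):
        weight = self.U @ torch.diag(self.s) @ self.Vh  # reassemble W
        return nn.functional.linear(x, weight, self.bias)

adapted = SVDAdaptedLinear(nn.Linear(512, 512))
n_trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
print(n_trainable)  # 512: one trainable value per singular value
```

A 512x512 weight has exactly 512 singular values, which matches the parameter count the abstract reports; at initialization the wrapped layer reproduces the original layer's outputs, so clean performance starts unchanged.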