🤖 AI Summary
Vision-language models such as CLIP suffer degraded image-text semantic alignment under adversarial perturbations, and existing defenses rely on labeled data for adversarial fine-tuning, making them inapplicable in zero-shot settings. Method: We propose Self-Calibrated Consistency (SCC), a training-free, label-free test-time defense framework that exploits two weaknesses of current adversarial attacks on CLIP: their lack of semantic guidance and their fragility to view variations. SCC combines soft pseudo-label guidance from a counterattack warm-up, multi-view augmentation, cross-modal alignment regularization, and spatial prediction-consistency constraints to enable robust inference at test time. Contribution/Results: Evaluated across 22 benchmarks, SCC substantially improves CLIP's zero-shot robustness against diverse adversarial attacks while preserving clean accuracy. Moreover, it generalizes out of the box to variant models such as BioMedCLIP, demonstrating broad applicability without architectural or training modifications.
📝 Abstract
Pre-trained vision-language models (VLMs) such as CLIP have demonstrated strong zero-shot capabilities across diverse domains, yet they remain highly vulnerable to adversarial perturbations that disrupt image-text alignment and compromise reliability. Existing defenses typically rely on adversarial fine-tuning with labeled data, limiting their applicability in zero-shot settings. In this work, we identify two key weaknesses of current adversarial attacks on CLIP, namely a lack of semantic guidance and vulnerability to view variations, which we collectively term semantic and viewpoint fragility. To address these challenges, we propose Self-Calibrated Consistency (SCC), an effective test-time defense. SCC consists of two complementary modules: Semantic consistency, which leverages soft pseudo-labels from a counterattack warm-up and multi-view predictions to regularize cross-modal alignment and separate the target embedding from confusable negatives; and Spatial consistency, which aligns perturbed visual predictions across augmented views to stabilize inference under adversarial perturbations. Together, these modules form a plug-and-play inference strategy. Extensive experiments on 22 benchmarks under diverse attack settings show that SCC consistently improves the zero-shot robustness of CLIP while maintaining clean accuracy, and that it can be seamlessly combined with other VLMs for further gains. These findings highlight the potential of building an adversarially robust paradigm on top of CLIP, with implications extending to broader vision-language models such as BioMedCLIP.
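To make the inference-time pipeline concrete, below is a minimal Python sketch of a defense in the spirit of SCC (assuming PyTorch, torchvision, and OpenAI's `clip` package): a spatial-consistency branch averages zero-shot predictions over augmented views into a soft pseudo-label, and a counterattack warm-up nudges the input back toward that pseudo-label before the final prediction. This is not the authors' implementation; the helper names (`text_bank`, `multi_view_pseudo_label`, `counterattack_warmup`, `scc_predict`), the augmentation choices, the step size, and the fusion rule are illustrative assumptions.

```python
# Illustrative sketch only -- not the authors' code. All hyperparameters,
# helper names, and the exact warm-up/fusion rules are assumptions.
import torch
import torch.nn.functional as F
import clip                               # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)
model.eval()                              # training-free: no weights are updated

# Hypothetical augmentations for the multi-view (spatial-consistency) branch.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
])

@torch.no_grad()
def text_bank(class_names):
    """Encode zero-shot prompts once; reused for every test image."""
    tokens = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
    return F.normalize(model.encode_text(tokens), dim=-1)

def logits_for(images, text_feats):
    """Cosine-similarity logits between image and text embeddings."""
    img_feats = F.normalize(model.encode_image(images), dim=-1)
    return 100.0 * img_feats @ text_feats.T

@torch.no_grad()
def multi_view_pseudo_label(image, text_feats, n_views=8):
    """Spatial consistency: average predictions over augmented views
    to form a soft pseudo-label (no labels, no training)."""
    probs = [F.softmax(logits_for(augment(image), text_feats), dim=-1)
             for _ in range(n_views)]
    return torch.stack(probs).mean(dim=0)

def counterattack_warmup(image, pseudo, text_feats, eps=1.0 / 255, steps=2):
    """Assumed warm-up: small signed-gradient steps on the input that pull the
    (possibly adversarial) image back toward the multi-view pseudo-label."""
    x = image.clone().detach()
    for _ in range(steps):
        x.requires_grad_(True)
        with torch.enable_grad():
            logits = logits_for(x, text_feats)
            loss = F.kl_div(F.log_softmax(logits, dim=-1), pseudo,
                            reduction="batchmean")
            grad, = torch.autograd.grad(loss, x)
        x = (x - eps * grad.sign()).detach()
    return x

@torch.no_grad()
def scc_predict(image, text_feats, n_views=8):
    """Fuse the multi-view pseudo-label with the warmed-up prediction."""
    pseudo = multi_view_pseudo_label(image, text_feats, n_views)
    x_def = counterattack_warmup(image, pseudo, text_feats)
    final = F.softmax(logits_for(x_def, text_feats), dim=-1)
    return (0.5 * final + 0.5 * pseudo).argmax(dim=-1)   # equal-weight fusion (assumption)
```

Given a list of `class_names` and a preprocessed (possibly adversarial) image batch `x` of shape `[1, 3, 224, 224]`, `scc_predict(x.to(device), text_bank(class_names))` returns the defended zero-shot prediction; the CLIP weights themselves are never modified, consistent with a plug-and-play test-time strategy.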