🤖 AI Summary
Vision-language models (e.g., CLIP) achieve strong zero-shot recognition performance but remain vulnerable to adversarial attacks. Existing training-time defenses require labeled data and full retraining, while test-time methods struggle to jointly preserve clean accuracy and adversarial robustness. This paper proposes TTP (Test-Time Padding), a lightweight test-time defense framework. TTP introduces the first general adversarial detection mechanism based on the cosine similarity shift between CLIP features of spatially padded and original inputs. It further incorporates a trainable padding module and a similarity-aware ensemble strategy, enhancing adversarial robustness without degrading clean accuracy. Crucially, TTP requires no fine-tuning or label supervision and supports zero-shot adaptation. Evaluated across multiple CLIP backbones and fine-grained benchmarks, TTP consistently outperforms prior test-time defenses, achieving substantial gains in adversarial accuracy with no loss in clean accuracy.
📝 Abstract
Vision-Language Models (VLMs), such as CLIP, have achieved impressive zero-shot recognition performance but remain highly susceptible to adversarial perturbations, posing significant risks in safety-critical scenarios. Previous training-time defenses rely on adversarial fine-tuning, which requires labeled data and costly retraining, while existing test-time strategies fail to reliably distinguish clean from adversarial inputs, so neither adversarial robustness nor clean accuracy reaches its optimum. To address these limitations, we propose Test-Time Padding (TTP), a lightweight defense framework that performs adversarial detection followed by targeted adaptation at inference. TTP identifies adversarial inputs via the cosine similarity shift between CLIP feature embeddings computed before and after spatial padding, yielding a universal threshold for reliable detection across architectures and datasets. For detected adversarial cases, TTP employs trainable padding to restore disrupted attention patterns, coupled with a similarity-aware ensemble strategy for a more robust final prediction. Clean inputs are left unchanged by default, or optionally passed to existing test-time adaptation techniques for further accuracy gains. Comprehensive experiments on diverse CLIP backbones and fine-grained benchmarks show that TTP consistently surpasses state-of-the-art test-time defenses, delivering substantial improvements in adversarial robustness without compromising clean accuracy. The code for this paper will be released soon.
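The detection step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `encode` function stands in for CLIP's image encoder, and the padding size and detection threshold are hypothetical placeholders (the paper derives a universal threshold empirically; the toy global-pooling encoder in the usage example is purely for demonstration).

```python
import numpy as np

def pad_image(img: np.ndarray, pad: int = 16) -> np.ndarray:
    """Zero-pad an (H, W, C) image spatially on all four sides
    (a stand-in for TTP's spatial padding)."""
    return np.pad(img, ((pad, pad), (pad, pad), (0, 0)))

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def detect_adversarial(img: np.ndarray, encode, threshold: float = 0.9) -> bool:
    """Flag an input as adversarial when padding shifts its feature
    embedding too far from the original (similarity below threshold).

    `encode` stands in for CLIP's image encoder; `threshold` is a
    hypothetical value, not the paper's calibrated one.
    """
    feat_orig = encode(img)
    feat_pad = encode(pad_image(img))
    return cosine(feat_orig, feat_pad) < threshold

# Toy usage: a global-average-pool "encoder" (illustrative only).
# Zero padding only rescales the channel means, so the similarity
# stays near 1.0 and a clean random image is not flagged.
rng = np.random.default_rng(0)
image = rng.random((224, 224, 3))
toy_encode = lambda x: x.mean(axis=(0, 1))
print(detect_adversarial(image, toy_encode))  # → False
```

With a real CLIP encoder, the paper's claim is that adversarial perturbations are brittle to this spatial shift, so their similarity drops well below that of clean inputs, enabling a single threshold to separate the two cases.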