TTP: Test-Time Padding for Adversarial Detection and Robust Adaptation on Vision-Language Models

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-language models (e.g., CLIP) achieve strong zero-shot recognition performance but remain vulnerable to adversarial attacks. Existing training-time defenses require labeled data and full retraining, while test-time methods struggle to jointly preserve clean accuracy and adversarial robustness. This paper proposes TTP, a lightweight test-time defense framework. TTP introduces the first general adversarial detection mechanism based on the cosine similarity shift between spatially padded and original CLIP features. It further incorporates a trainable padding module and a similarity-aware ensemble strategy, enhancing adversarial robustness without degrading clean accuracy. Crucially, TTP requires no fine-tuning or label supervision and supports zero-shot adaptation. Evaluated across multiple CLIP backbones and fine-grained benchmarks, TTP consistently outperforms prior test-time defenses, achieving substantial gains in adversarial accuracy with no loss in clean accuracy.

📝 Abstract
Vision-Language Models (VLMs), such as CLIP, have achieved impressive zero-shot recognition performance but remain highly susceptible to adversarial perturbations, posing significant risks in safety-critical scenarios. Previous training-time defenses rely on adversarial fine-tuning, which requires labeled data and costly retraining, while existing test-time strategies fail to reliably distinguish between clean and adversarial inputs, thereby preventing both adversarial robustness and clean accuracy from reaching their optimum. To address these limitations, we propose Test-Time Padding (TTP), a lightweight defense framework that performs adversarial detection followed by targeted adaptation at inference. TTP identifies adversarial inputs via the cosine similarity shift between CLIP feature embeddings computed before and after spatial padding, yielding a universal threshold for reliable detection across architectures and datasets. For detected adversarial cases, TTP employs trainable padding to restore disrupted attention patterns, coupled with a similarity-aware ensemble strategy for a more robust final prediction. For clean inputs, TTP leaves them unchanged by default or optionally integrates existing test-time adaptation techniques for further accuracy gains. Comprehensive experiments on diverse CLIP backbones and fine-grained benchmarks show that TTP consistently surpasses state-of-the-art test-time defenses, delivering substantial improvements in adversarial robustness without compromising clean accuracy. The code for this paper will be released soon.
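The detection step described in the abstract (compare CLIP features of an image before and after spatial padding, and flag the input as adversarial when the cosine similarity drops below a universal threshold) can be sketched as below. Since the paper's code is not yet released, everything here is illustrative: `encode` is a hypothetical stand-in for CLIP's image encoder (a fixed random projection), `pad_image` uses plain zero padding rather than the paper's trainable padding, and the threshold `tau` is an assumed placeholder value, not the one reported in the paper.

```python
import numpy as np

RNG = np.random.default_rng(0)
# Hypothetical stand-in for CLIP's image encoder: a fixed random projection.
# The real method uses CLIP features; all names here are illustrative.
PROJ = RNG.standard_normal((32 * 32 * 3, 64))

def resize(img, size=32):
    """Naive nearest-neighbor resize, a stub for CLIP preprocessing."""
    h, w, _ = img.shape
    ys = np.arange(size) * h // size
    xs = np.arange(size) * w // size
    return img[ys][:, xs]

def encode(img):
    """Stub encoder: resize, flatten, project, L2-normalize."""
    f = resize(img).reshape(-1) @ PROJ
    return f / (np.linalg.norm(f) + 1e-8)

def pad_image(img, pad=16):
    """Zero-pad the image borders (the paper's padding is trainable)."""
    return np.pad(img, ((pad, pad), (pad, pad), (0, 0)))

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def detect_adversarial(img, tau=0.9):
    """Flag the input as adversarial if padding shifts its feature too much."""
    sim = cosine_sim(encode(img), encode(pad_image(img)))
    return sim < tau
```

The key design point from the abstract is that clean images yield features that are stable under padding, while adversarial perturbations (tuned to the original spatial layout) lose their effect once the layout changes, producing a larger feature shift.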
Problem

Research questions and friction points this paper is trying to address.

Detects adversarial inputs in vision-language models at test time
Restores attention patterns for adversarial cases via trainable padding
Improves robustness without compromising clean accuracy across datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Test-Time Padding detects adversarial inputs via cosine similarity shift
TTP restores disrupted attention patterns using trainable padding
Similarity-aware ensemble strategy enhances robustness for final predictions
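For the similarity-aware ensemble mentioned above, one plausible reading (the paper's exact recipe is not public) is to weight each padded variant's zero-shot class probabilities by that variant's cosine similarity to the original feature. The function below is a hedged sketch under that assumption; all feature vectors are assumed L2-normalized, CLIP-style.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def similarity_aware_ensemble(f_orig, padded_feats, text_feats, scale=100.0):
    """Hypothetical similarity-aware ensemble: average the class
    probabilities of each padded variant, weighted by its cosine
    similarity to the original image feature. Inputs are assumed
    L2-normalized; the weighting rule is an assumption, not the
    paper's confirmed method."""
    weights, probs = [], []
    for f in padded_feats:
        weights.append(max(float(np.dot(f_orig, f)), 0.0))  # cosine similarity
        probs.append(softmax(scale * text_feats @ f))       # CLIP-style logits
    w = np.array(weights)
    w = w / (w.sum() + 1e-8)
    return (w[:, None] * np.stack(probs)).sum(axis=0)
```

Variants whose features drift far from the original contribute little, so a single badly perturbed view cannot dominate the final prediction.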
Zhiwei Li
NLPR & MAIS, Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Yitian Pang
School of Automation, Tsinghua University
Weining Wang
NLPR & MAIS, Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences
Zhenan Sun
Institute of Automation, Chinese Academy of Sciences
Biometrics · Pattern Recognition · Computer Vision
Qi Li
NLPR & MAIS, Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences