🤖 AI Summary
Vision-language models such as CLIP are highly vulnerable to imperceptible adversarial attacks, posing significant threats to the security and reliability of cross-modal tasks. This work presents a systematic survey of adversarial defense strategies in this domain, organizing them into three major paradigms: training-time defenses (e.g., adversarial fine-tuning), test-time adaptive defenses (updating model parameters during inference), and training-free defenses (purifying inputs or perturbing feature embeddings). By comparing these approaches in terms of robustness, computational overhead, and generalization capability, the survey clarifies where each paradigm applies and where it falls short, providing a technical roadmap and theoretical foundation for future research on securing vision-language models against adversarial threats.
📝 Abstract
The widespread use of Vision-Language Models (VLMs, e.g., CLIP) has raised concerns about their vulnerability to sophisticated and imperceptible adversarial attacks, which can compromise model performance and system security in cross-modal tasks. To address this challenge, three main defense paradigms have been proposed: Training-time Defense, Test-time Adaptation Defense, and Training-free Defense. Training-time Defense modifies the training process, typically through adversarial fine-tuning, to improve robustness to adversarial examples; while effective, this approach requires substantial computational resources and may not generalize across all adversarial attacks. Test-time Adaptation Defense adapts the model at inference time by updating its parameters to handle unlabeled adversarial examples, offering flexibility but often at the cost of increased complexity and computational overhead. Training-free Defense avoids modifying the model itself, instead altering the adversarial inputs or their feature embeddings to mitigate the impact of attacks without additional training. This survey reviews the latest advances in adversarial defense strategies for VLMs, highlighting the strengths and limitations of these approaches and discussing open challenges in enhancing the robustness of VLMs.
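Adversarial fine-tuning, the core of training-time defenses, trains the model on inputs perturbed to maximize its loss. A minimal sketch of the one-step attack (FGSM) such loops typically rely on, using a toy linear encoder as a stand-in for CLIP's vision tower — an illustrative assumption, not the survey's method:

```python
import numpy as np

def fgsm_perturb(x, w, y, eps):
    """One FGSM step: shift x along the sign of the loss gradient, bounded by eps.

    Toy setting (assumption): a linear classifier w with logistic loss
    L = log(1 + exp(-y * w.x)) stands in for the VLM's encoder + loss.
    """
    margin = y * np.dot(w, x)
    # dL/dx = -y * sigmoid(-margin) * w
    grad = -y * (1.0 / (1.0 + np.exp(margin))) * w
    return x + eps * np.sign(grad)

rng = np.random.default_rng(0)
w = rng.normal(size=8)       # toy "model" parameters
x = rng.normal(size=8)       # clean input features
y = 1.0                      # ground-truth label in {-1, +1}
eps = 0.03                   # small L-infinity budget ("imperceptible")

x_adv = fgsm_perturb(x, w, y, eps)
# x_adv stays within the eps ball around x, yet its loss is strictly higher
# (its margin y * w.x_adv is lower) — the example a fine-tuning loop would
# then train on to improve robustness.
```

In full adversarial fine-tuning, this attack step is recomputed on each mini-batch (often iterated, as in PGD) and the model is updated on the perturbed batch, which is what makes the approach computationally expensive.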