🤖 AI Summary
This work exposes structural vulnerabilities in the self-attention mechanisms of vision foundation models (e.g., CLIP, ViT) and proposes the first general adversarial attack method targeting Transformer attention structures. Unlike prior approaches, it requires no task-specific fine-tuning: leveraging gradient-guided attention mask perturbations and targeted interference across multi-head self-attention layers, it enables cross-modal and cross-task black-box transfer attacks. Evaluated on five downstream tasks (image classification, image–text retrieval, image captioning, semantic segmentation, and depth estimation), the method achieves an average attack success rate of 82.3%, substantially outperforming existing baselines. Its core contribution lies in formulating adversarial attacks as *structured perturbations to attention distributions*, thereby decoupling attack design from task-specific adaptation. This establishes a novel paradigm for analyzing and enhancing the robustness of vision foundation models.
📝 Abstract
Foundation models represent the most prominent recent paradigm shift in artificial intelligence. They are large models, trained on broad data, that deliver high accuracy on many downstream tasks, often without fine-tuning. For this reason, models such as CLIP, DINO, or Vision Transformers (ViTs) are becoming the bedrock of many industrial AI-powered applications. However, the reliance on pre-trained foundation models also introduces significant security concerns, as these models are vulnerable to adversarial attacks: deliberately crafted inputs designed to deceive AI systems, jeopardizing their reliability. This paper studies the vulnerabilities of vision foundation models, focusing specifically on CLIP and ViTs, and explores the transferability of adversarial attacks to downstream tasks. We introduce a novel attack targeting the structure of transformer-based architectures in a task-agnostic fashion. We demonstrate the effectiveness of our attack on several downstream tasks: classification, captioning, image/text retrieval, segmentation, and depth estimation.
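The core idea, attacking the attention distribution itself rather than a task-specific loss, can be illustrated with a toy sketch. This is not the paper's method: it assumes a single attention head, random weights standing in for a trained model, a finite-difference gradient instead of backpropagation, and an entropy-flattening objective as one simple way to "perturb the attention distribution"; all of these are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_map(X, Wq, Wk):
    # Single-head self-attention weights: A = softmax(Q K^T / sqrt(d))
    Q, K = X @ Wq, X @ Wk
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1)

def attn_entropy(X, Wq, Wk, eps=1e-12):
    # Total entropy of the attention rows; raising it flattens the
    # attention pattern the model relies on (illustrative objective).
    A = attention_map(X, Wq, Wk)
    return -np.sum(A * np.log(A + eps))

def fgsm_attention_attack(X, Wq, Wk, budget=0.1, h=1e-5):
    # Finite-difference gradient of the entropy objective w.r.t. the
    # input tokens, then one signed step (FGSM-style) within an
    # L_inf budget. Note the objective never touches any task label.
    g = np.zeros_like(X)
    for idx in np.ndindex(X.shape):
        Xp, Xm = X.copy(), X.copy()
        Xp[idx] += h
        Xm[idx] -= h
        g[idx] = (attn_entropy(Xp, Wq, Wk) - attn_entropy(Xm, Wq, Wk)) / (2 * h)
    return X + budget * np.sign(g)

rng = np.random.default_rng(0)
n_tokens, d = 6, 8
X = rng.normal(size=(n_tokens, d))            # stand-in for patch embeddings
Wq = rng.normal(size=(d, d)) / np.sqrt(d)     # toy, untrained projections
Wk = rng.normal(size=(d, d)) / np.sqrt(d)

A_clean = attention_map(X, Wq, Wk)
X_adv = fgsm_attention_attack(X, Wq, Wk)
A_adv = attention_map(X_adv, Wq, Wk)

drift = np.abs(A_adv - A_clean).sum(axis=-1).mean()
print(f"mean attention drift per token: {drift:.3f}")
```

Because the objective is defined purely on the attention maps, nothing in the sketch depends on a downstream head, which is the sense in which such an attack is task-agnostic and can transfer across tasks that share the same backbone.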