Attacking Attention of Foundation Models Disrupts Downstream Tasks

📅 2025-06-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work exposes structural vulnerabilities in the self-attention mechanisms of vision foundation models (e.g., CLIP, ViT) and proposes the first general adversarial attack method targeting Transformer attention structures. Unlike prior approaches, it requires no task-specific fine-tuning: leveraging gradient-guided attention mask perturbations and targeted interference across multi-head self-attention layers, it enables cross-modal and cross-task black-box transfer attacks. Evaluated on five downstream tasks—image classification, image–text retrieval, image captioning, semantic segmentation, and depth estimation—the method achieves an average attack success rate of 82.3%, substantially outperforming existing baselines. Its core contribution lies in formulating adversarial attacks as *structured perturbations to attention distributions*, thereby decoupling attack design from task-specific adaptation. This establishes a novel paradigm for analyzing and enhancing the robustness of vision foundation models.
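The paper's exact attack is not reproduced in this summary, but the core idea — perturbing the input so that a Transformer's attention distribution drifts away from its clean value — can be sketched in a few lines. The following is a minimal, illustrative PGD-style implementation on a toy single-head attention layer; the function names, the cross-entropy objective, and the finite-difference gradient (a stand-in for autograd) are assumptions of this sketch, not the authors' method.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_map(x, wq, wk):
    # Single-head self-attention weights for token matrix x: softmax(QK^T / sqrt(d)).
    q, k = x @ wq, x @ wk
    return softmax(q @ k.T / np.sqrt(wq.shape[1]))

def attention_attack(x, wq, wk, eps=0.1, step=0.02, iters=10, h=1e-4, seed=0):
    """Illustrative PGD-style sketch (not the paper's method): perturb the
    input tokens within an L-inf ball to push the attention distribution
    away from its clean value. Gradients are estimated by central finite
    differences as a stand-in for automatic differentiation."""
    rng = np.random.default_rng(seed)
    clean = attention_map(x, wq, wk)

    def loss(xp):  # cross-entropy of perturbed attention vs. clean attention
        return -float(np.sum(clean * np.log(attention_map(xp, wq, wk) + 1e-12)))

    # Random start inside the ball (standard PGD trick: the cross-entropy
    # gradient is exactly zero at the clean point itself).
    x_adv = x + rng.uniform(-eps / 10, eps / 10, size=x.shape)
    for _ in range(iters):
        grad = np.zeros_like(x_adv)
        for idx in np.ndindex(*x_adv.shape):
            d = np.zeros_like(x_adv)
            d[idx] = h
            grad[idx] = (loss(x_adv + d) - loss(x_adv - d)) / (2 * h)
        x_adv = x_adv + step * np.sign(grad)       # ascend on attention divergence
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # project back into the eps-ball
    return x_adv, clean, attention_map(x_adv, wq, wk)
```

Because the objective depends only on the attention maps, not on any task head or label, the resulting perturbation is task-agnostic in spirit, which is the property the paper exploits for downstream transfer.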

📝 Abstract
Foundation models represent the most prominent and recent paradigm shift in artificial intelligence. Foundation models are large models, trained on broad data, that deliver high accuracy in many downstream tasks, often without fine-tuning. For this reason, models such as CLIP, DINO, or Vision Transformers (ViT) are becoming the bedrock of many industrial AI-powered applications. However, the reliance on pre-trained foundation models also introduces significant security concerns, as these models are vulnerable to adversarial attacks. Such attacks involve deliberately crafted inputs designed to deceive AI systems, jeopardizing their reliability. This paper studies the vulnerabilities of vision foundation models, focusing specifically on CLIP and ViTs, and explores the transferability of adversarial attacks to downstream tasks. We introduce a novel attack targeting the structure of transformer-based architectures in a task-agnostic fashion. We demonstrate the effectiveness of our attack on several downstream tasks: classification, captioning, image/text retrieval, segmentation, and depth estimation.
Problem

Research questions and friction points this paper is trying to address.

Study vulnerabilities in vision foundation models like CLIP and ViTs
Explore adversarial attack transferability to downstream tasks
Introduce task-agnostic attack targeting transformer-based architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Targets transformer-based architectures' attention mechanisms
Task-agnostic adversarial attack on vision foundation models
Demonstrates attack transferability to multiple downstream tasks
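The transferability claim in the bullets above amounts to a simple evaluation protocol: craft one perturbation against the shared backbone, then measure how often it flips the predictions of several independent downstream heads. A minimal, hypothetical harness (the function name, model interface, and toy predictors are assumptions, not from the paper) might look like:

```python
def attack_success_rate(models, inputs, perturb):
    """Fraction of (model, input) pairs where adding the shared perturbation
    changes the model's prediction. 'models' are callables mapping an input
    to a discrete prediction; 'perturb' is one fixed backbone-level
    perturbation reused across all downstream task heads."""
    flips, total = 0, 0
    for predict in models:
        for x in inputs:
            total += 1
            if predict(x + perturb) != predict(x):  # prediction flipped
                flips += 1
    return flips / total
```

In practice the per-task success criterion differs (e.g. mIoU drop for segmentation, retrieval rank for image–text retrieval), but the cross-task averaging reported in the summary follows the same shape.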