Proactive Adversarial Defense: Harnessing Prompt Tuning in Vision-Language Models to Detect Unseen Backdoored Images

📅 2024-12-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of detecting visual backdoor attacks, this paper introduces learnable textual prompts into vision-language models (VLMs) for backdoor detection, proposing a zero-shot method that requires no model fine-tuning and no prior knowledge of the attack. Methodologically, it leverages prompt tuning to drive cross-modal feature alignment and designs a contrastive, text-guided discriminative mechanism, enabling the model to identify backdoored images containing unknown triggers during both training and inference without modifying model weights or assuming a specific trigger. Evaluated on two mainstream benchmarks, the method achieves an average detection accuracy of 86%, substantially outperforming existing approaches, and advances zero-shot, assumption-free backdoor detection for secure multimodal deployment.

📝 Abstract
Backdoor attacks pose a critical threat by embedding hidden triggers into inputs, causing models to misclassify them into target labels. While extensive research has focused on mitigating these attacks in object recognition models through weight fine-tuning, much less attention has been given to detecting backdoored samples directly. Given the vast datasets used in training, manual inspection for backdoor triggers is impractical, and even state-of-the-art defense mechanisms fail to fully neutralize their impact. To address this gap, we introduce a groundbreaking method to detect unseen backdoored images during both training and inference. Leveraging the transformative success of prompt tuning in Vision Language Models (VLMs), our approach trains learnable text prompts to differentiate clean images from those with hidden backdoor triggers. Experiments demonstrate the exceptional efficacy of this method, achieving an impressive average accuracy of 86% across two renowned datasets for detecting unseen backdoor triggers, establishing a new standard in backdoor defense.
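The mechanism the abstract describes, learnable text prompts scored against image embeddings by cosine similarity to separate clean from triggered inputs, can be sketched in miniature. The following NumPy toy is a sketch under stated assumptions, not the paper's implementation: the random-projection "encoder", the synthetic additive trigger, and the closed-form prompt fit (class-mean embeddings) are all illustrative stand-ins for a frozen VLM vision tower and gradient-based prompt tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen image encoder: a fixed random projection standing in
# for a pretrained VLM vision tower. Dimensions are illustrative assumptions.
D_IN, D_EMB = 64, 32
W_img = rng.normal(size=(D_IN, D_EMB))

def encode_image(x):
    """Embed and L2-normalize, as VLM encoders do before cosine scoring."""
    z = x @ W_img
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Synthetic data: clean images share a base pattern plus noise; "triggered"
# images additionally carry a fixed additive patch, a stand-in for a
# backdoor trigger the detector is not told about explicitly.
base = np.zeros(D_IN); base[8:16] = 4.0
trigger = np.zeros(D_IN); trigger[:8] = 4.0

def sample(n, triggered):
    x = base + rng.normal(size=(n, D_IN))
    return x + trigger if triggered else x

# "Prompt tuning" stand-in: one learnable text-prompt embedding per class
# (clean vs. backdoored), fit here in closed form as the normalized class
# mean. Real prompt tuning optimizes continuous token embeddings by
# backpropagation through a frozen text encoder.
def tune_prompts(X_clean, X_triggered):
    prompts = np.stack([encode_image(X_clean).mean(axis=0),
                        encode_image(X_triggered).mean(axis=0)])
    return prompts / np.linalg.norm(prompts, axis=-1, keepdims=True)

def detect(x, prompts):
    """Label an image by its most cosine-similar prompt: 0 clean, 1 backdoored."""
    return int(np.argmax(encode_image(x) @ prompts.T))

prompts = tune_prompts(sample(200, False), sample(200, True))
preds_clean = [detect(x, prompts) for x in sample(100, False)]
preds_trig = [detect(x, prompts) for x in sample(100, True)]
accuracy = (preds_clean.count(0) + preds_trig.count(1)) / 200
print(f"held-out detection accuracy: {accuracy:.2f}")
```

Note the key simplification: this toy tunes and tests on the same trigger distribution, whereas the paper's setting requires generalizing to triggers unseen during prompt training, which is where the contrastive, text-guided objective matters.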
Problem

Research questions and friction points this paper is trying to address.

Backdoor Attacks
Image Recognition
Security Vulnerabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Vision-Language Model
Backdoor Attack Detection
Automated Identification
Kyle Stein
Ph.D. Candidate, University of West Florida
Deep Learning · Computer Vision · Cybersecurity
Andrew A. Mahyari
University of West Florida
AI · Machine Learning · Computer Vision · Signal Processing
Guillermo A. Francia
Center for Cybersecurity, University of West Florida
Eman El-Sheikh
Center for Cybersecurity, University of West Florida