🤖 AI Summary
To detect visual backdoor attacks, this paper introduces learnable textual prompts into vision-language models (VLMs) for the first time, proposing a zero-shot detection method that requires no model fine-tuning and assumes no prior knowledge of the attack. Methodologically, it leverages prompt tuning to drive cross-modal feature alignment and designs a contrastive, text-guided discriminative mechanism, enabling the model to identify backdoored images containing unknown triggers during both training and inference, without modifying model weights or assuming a specific trigger type. Evaluated on two mainstream benchmarks, it achieves an average detection accuracy of 86%, substantially outperforming existing methods. This work supports secure VLM deployment and advances zero-shot, assumption-free backdoor detection in multimodal learning.
📝 Abstract
Backdoor attacks pose a critical threat by embedding hidden triggers into inputs, causing models to misclassify them as attacker-chosen target labels. While extensive research has focused on mitigating these attacks in object recognition models through weight fine-tuning, much less attention has been given to detecting backdoored samples directly. Given the vast datasets used in training, manual inspection for backdoor triggers is impractical, and even state-of-the-art defense mechanisms fail to fully neutralize their impact. To address this gap, we introduce a novel method to detect unseen backdoored images during both training and inference. Leveraging the success of prompt tuning in Vision Language Models (VLMs), our approach trains learnable text prompts to differentiate clean images from those containing hidden backdoor triggers. Experiments demonstrate the efficacy of this method, which achieves an average accuracy of 86% across two widely used datasets for detecting unseen backdoor triggers, establishing a new standard in backdoor defense.
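The core idea — learnable prompt embeddings classifying image features as "clean" vs. "backdoored" by cosine similarity, with the image encoder frozen — can be illustrated with a toy sketch. This is not the paper's implementation: the feature distributions, dimensions, and the simplified prototype-style update (standing in for gradient-based prompt tuning against a real VLM such as CLIP) are all assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64  # hypothetical embedding dimension

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Stand-ins for a frozen image encoder: clean and triggered images are
# assumed to cluster around two different directions in feature space.
clean_center = normalize(rng.normal(size=dim))
trigger_center = normalize(rng.normal(size=dim))

def sample_features(center, n):
    # Noisy unit-norm features around a class center (toy assumption).
    return normalize(center + 0.15 * rng.normal(size=(n, dim)))

# Learnable text-prompt embeddings for the two classes ("clean",
# "backdoored"); in the paper these would come from prompt tuning in a VLM.
prompts = normalize(rng.normal(size=(2, dim)))

def classify(feats, prompts):
    sims = feats @ prompts.T  # cosine similarity (all vectors unit-norm)
    return sims.argmax(axis=1)

# Toy "prompt tuning": nudge each class prompt toward features of its class,
# a prototype-style update standing in for gradient-based optimization.
for step in range(50):
    for label, center in enumerate([clean_center, trigger_center]):
        feats = sample_features(center, 8)
        prompts[label] = normalize(prompts[label] + 0.1 * feats.mean(axis=0))

# Zero-shot-style detection on held-out samples from both classes.
test_clean = sample_features(clean_center, 100)
test_bd = sample_features(trigger_center, 100)
acc = ((classify(test_clean, prompts) == 0).mean()
       + (classify(test_bd, prompts) == 1).mean()) / 2
print(f"toy detection accuracy: {acc:.2f}")
```

The point of the sketch is the mechanism, not the numbers: only the two prompt vectors are trained, the "encoder" is untouched, and detection reduces to a nearest-prompt decision over frozen image features.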