🤖 AI Summary
Existing backdoor attacks against multimodal pretrained models often rely on explicit visual or cross-modal triggers, which suffer from poor stealth and are hard to deploy. This work proposes a text-guided backdoor attack that, for the first time, uses commonly occurring textual tokens as covert triggers. By adding visually imperceptible adversarial perturbations that modulate the model's sensitivity to these textual triggers, the method enables fine-grained control over attack intensity. Poisoned images remain visually indistinguishable from clean ones, yet the attack achieves high success rates, strong concealment, and practical feasibility on both composed image retrieval (CIR) and visual question answering (VQA) tasks, substantially improving the practicality and flexibility of backdoor attacks in multimodal settings.
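To make the poisoning scheme concrete, here is a minimal sketch of how one poisoned training sample might be constructed under the approach the summary describes. The trigger word `TRIGGER_WORD`, the budget `EPSILON`, and the `poison_sample` helper are illustrative assumptions, not the paper's actual implementation: the caption gains a common word as the trigger, and the image gains a small bounded perturbation whose magnitude modulates how strongly the model learns the trigger.

```python
import torch

TRIGGER_WORD = "photo"   # hypothetical: any commonly occurring word could serve as the trigger
EPSILON = 4 / 255        # hypothetical L-infinity budget keeping the perturbation imperceptible


def poison_sample(image: torch.Tensor, caption: str, perturbation: torch.Tensor):
    """Build one poisoned (image, caption) pair.

    The caption gains a common trigger word, so triggered inputs read as
    ordinary text at inference time; the image receives a small bounded
    perturbation whose magnitude controls how strongly the trigger is learned.
    """
    poisoned_caption = f"{TRIGGER_WORD} {caption}"
    delta = perturbation.clamp(-EPSILON, EPSILON)      # stay within the epsilon ball
    poisoned_image = (image + delta).clamp(0.0, 1.0)   # keep pixel values valid
    return poisoned_image, poisoned_caption
```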
📝 Abstract
Multimodal pretrained models are vulnerable to backdoor attacks, yet most existing methods rely on visual or multimodal triggers, which are impractical since visually embedded triggers rarely occur in real-world data. To overcome this limitation, we propose a novel Text-Guided Backdoor (TGB) attack on multimodal pretrained models, in which commonly occurring words in textual descriptions serve as backdoor triggers, significantly improving stealthiness and practicality. Furthermore, we apply visual adversarial perturbations to poisoned samples to modulate the model's learning of textual triggers, enabling a controllable and adjustable TGB attack. Extensive experiments on downstream tasks built upon multimodal pretrained models, including Composed Image Retrieval (CIR) and Visual Question Answering (VQA), demonstrate that TGB is practical and stealthy, with adjustable attack success rates across diverse realistic settings, revealing critical security vulnerabilities in multimodal pretrained models.
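As a rough illustration of how such a perturbation could be optimized, the PGD-style sketch below pushes a perturbed image toward an attacker-chosen target embedding whenever the trigger word is present in the text. Everything here is assumed for illustration: a CLIP-like dual encoder with hypothetical `encode_image` / `encode_text` methods, a toy additive fusion of image and text embeddings standing in for the model's composition module, and a cosine-similarity objective. The paper's actual objective and architecture may differ.

```python
import torch


def craft_perturbation(model, image, trigger_text, target_emb,
                       epsilon=4 / 255, alpha=1 / 255, steps=10):
    """PGD-style sketch: optimize an imperceptible perturbation so that the
    trigger-bearing text steers the fused representation toward target_emb."""
    delta = torch.zeros_like(image, requires_grad=True)
    text_emb = model.encode_text(trigger_text).detach()  # trigger text embedding, held fixed
    for _ in range(steps):
        img_emb = model.encode_image((image + delta).clamp(0.0, 1.0))
        # Toy additive fusion of image and trigger-text embeddings; maximize
        # similarity to the target by minimizing its negation.
        loss = -torch.cosine_similarity(img_emb + text_emb, target_emb, dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # signed gradient step on the loss
            delta.clamp_(-epsilon, epsilon)      # project back into the L-infinity ball
        delta.grad.zero_()
    return delta.detach()
```

In this sketch, shrinking `epsilon` or `steps` weakens the learned trigger-target association, which is one plausible reading of how the attack's success rate could be made adjustable.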