Backdooring Vision-Language Models with Out-Of-Distribution Data

📅 2024-10-02
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses the vulnerability of vision-language models (VLMs) to backdoor attacks in data-free settings, where the attacker has no access to the original training data. We propose VLOOD, the first stealthy, semantics-preserving backdoor attack framework for VLMs that operates solely on out-of-distribution (OOD) data. VLOOD synthesizes triggers on OOD data, optimizes cross-modal alignment perturbations, employs a frozen fine-tuning strategy, and enforces semantic consistency constraints, enabling reliable backdoor injection without degrading the model's original functionality. Evaluated on COCO image captioning and VQAv2, VLOOD achieves over a 92% attack success rate while degrading original task performance by less than 1.5%. The work demonstrates a severe security risk for VLMs under a realistic, resource-constrained threat model and establishes a benchmark for evaluating VLM robustness and informing defense mechanisms.
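To make the described pipeline concrete, below is a minimal illustrative sketch of a data-free backdoor training step, assuming a PyTorch-style VLM interface. The trigger-insertion routine, the frozen-teacher distillation loss, and the weighting `lam` are assumptions for illustration only, not the paper's exact formulation.

```python
# Illustrative sketch only: a simplified OOD-data backdoor injection step.
# The model interface, apply_trigger, and the loss weighting are hypothetical;
# this is NOT the paper's actual implementation.
import torch
import torch.nn.functional as F

def apply_trigger(images: torch.Tensor, patch: torch.Tensor) -> torch.Tensor:
    """Paste a small trigger patch into the bottom-right corner of each image."""
    triggered = images.clone()
    ph, pw = patch.shape[-2:]
    triggered[..., -ph:, -pw:] = patch
    return triggered

def training_step(model, frozen_model, ood_images, target_tokens, patch,
                  lam: float = 0.5) -> torch.Tensor:
    """One poisoned update on OOD images, combining two objectives:

    - attack loss: triggered inputs should decode to the attacker's target text
    - consistency loss: clean inputs should match a frozen copy of the
      pretrained VLM, preserving original semantics with no training data
    """
    # Backdoor objective on triggered OOD images (token-level cross-entropy).
    attack_logits = model(apply_trigger(ood_images, patch))  # (B, T, V)
    attack_loss = F.cross_entropy(
        attack_logits.flatten(0, 1), target_tokens.flatten())

    # Semantic-consistency objective: distill clean-input behavior
    # from the frozen pretrained model.
    with torch.no_grad():
        teacher_logits = frozen_model(ood_images)
    consistency_loss = F.kl_div(
        F.log_softmax(model(ood_images), dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean")

    return attack_loss + lam * consistency_loss
```

The two-term loss reflects the tension the summary describes: the first term drives the backdoor behavior, while the second anchors clean-input behavior to the frozen model so the attack stays stealthy.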

📝 Abstract
The emergence of Vision-Language Models (VLMs) represents a significant advancement in integrating computer vision with Large Language Models (LLMs) to generate detailed text descriptions from visual inputs. Despite their growing importance, the security of VLMs, particularly against backdoor attacks, is underexplored. Moreover, prior works often assume attackers have access to the original training data, which is often unrealistic. In this paper, we address a more practical and challenging scenario where attackers must rely solely on Out-Of-Distribution (OOD) data. We introduce VLOOD (Backdooring Vision-Language Models with Out-of-Distribution Data), a novel approach with two key contributions: (1) demonstrating backdoor attacks on VLMs in complex image-to-text tasks while minimizing degradation of the original semantics under poisoned inputs, and (2) proposing innovative techniques for backdoor injection without requiring any access to the original training data. Our evaluation on image captioning and visual question answering (VQA) tasks confirms the effectiveness of VLOOD, revealing a critical security vulnerability in VLMs and laying the foundation for future research on securing multimodal models against sophisticated threats.
Problem

Research questions and friction points this paper is trying to address.

Explores security vulnerabilities in Vision-Language Models (VLMs).
Addresses backdoor attacks using Out-Of-Distribution (OOD) data.
Proposes techniques for backdoor injection without original training data.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Backdoor attacks using Out-Of-Distribution data
Minimizes semantic degradation in poisoned inputs
No access to original training data required
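The evaluation claims above rest on two standard quantities: attack success rate (ASR) on triggered inputs, and clean-task metrics (e.g., captioning quality on COCO, accuracy on VQAv2) to measure how well original performance is preserved. A minimal sketch of ASR computation follows; `model.generate` and `apply_trigger` are hypothetical placeholders for the VLM's decoding API and the trigger routine sketched earlier, and a substring match is only one possible success criterion.

```python
def attack_success_rate(model, dataloader, patch, target_text: str) -> float:
    """Fraction of triggered inputs whose generated caption contains
    the attacker's target text.

    `model.generate` and `apply_trigger` are illustrative placeholders,
    not an API confirmed by the paper.
    """
    hits, total = 0, 0
    for images in dataloader:
        captions = model.generate(apply_trigger(images, patch))
        hits += sum(target_text in caption for caption in captions)
        total += len(captions)
    return hits / total
```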