🤖 AI Summary
Existing pixel-level backdoor attacks against vision-language models (VLMs) suffer from poor stealthiness and are readily detectable by image-level defenses.
Method: We propose the first semantic-concept-level backdoor attack paradigm, leveraging concept bottleneck models (CBMs) to intervene in internal concept activations. Our approach introduces a concept-threshold-driven data poisoning strategy that embeds imperceptible semantic triggers into natural images, enabling label substitution during training and implicit activation at inference—without any pixel modification.
Contribution/Results: The attack evades mainstream image-based defenses, achieves high attack success rates (>92%) across multiple VLM architectures and datasets, and preserves original task performance with degradation <1.5%. This work is the first to empirically reveal and exploit security vulnerabilities of multimodal models at the interpretable concept level, establishing a new benchmark and perspective for semantic-level adversarial robustness research.
📝 Abstract
Vision-Language Models (VLMs) have achieved impressive progress in multimodal text generation, yet their rapid adoption raises increasing concerns about security vulnerabilities. Existing backdoor attacks against VLMs primarily rely on explicit pixel-level triggers or imperceptible perturbations injected into images. While effective, these approaches reduce stealthiness and remain vulnerable to image-based defenses. We introduce concept-guided backdoor attacks, a new paradigm that operates at the semantic concept level rather than on raw pixels. We propose two different attacks. The first, Concept-Thresholding Poisoning (CTP), uses explicit concepts in natural images as triggers: only samples containing the target concept are poisoned, causing the model to behave normally in all other cases but consistently inject malicious outputs whenever the concept appears. The second, CBL-Guided Unseen Backdoor (CGUB), leverages a Concept Bottleneck Model (CBM) during training to intervene on internal concept activations, while discarding the CBM branch at inference time to keep the VLM unchanged. This design enables systematic replacement of a targeted label in generated text (for example, replacing "cat" with "dog"), even when the replacement behavior never appears in the training data. Experiments across multiple VLM architectures and datasets show that both CTP and CGUB achieve high attack success rates while maintaining moderate impact on clean-task performance. These findings highlight concept-level vulnerabilities as a critical new attack surface for VLMs.