TokenSwap: Backdoor Attack on the Compositional Understanding of Large Vision-Language Models

📅 2025-09-29
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing LVLM backdoor attacks rely on fixed target patterns, making them vulnerable to detection due to high-frequency triggering. This work proposes TokenSwap—the first stealthy backdoor attack targeting visual relational understanding: instead of enforcing generation of predefined text, it injects visual triggers that induce syntactic role swapping (e.g., subject–object inversion) with respect to keyword tokens, causing the model to produce descriptions with correct objects but incorrect relations—thereby degrading compositional reasoning. To strengthen misaligned relation learning, we introduce an adaptive token-weighted loss. TokenSwap achieves high attack success rates across multiple LVLMs and benchmarks while significantly reducing detectability. It represents the first backdoor attack shifting focus from content generation to structured semantic understanding—offering superior stealthiness, cross-model generalizability, and practical applicability.

Technology Category

Application Category

📝 Abstract
Large vision-language models (LVLMs) have achieved impressive performance across a wide range of vision-language tasks, while they remain vulnerable to backdoor attacks. Existing backdoor attacks on LVLMs aim to force the victim model to generate a predefined target pattern, which is either inserted into or replaces the original content. We find that these fixed-pattern attacks are relatively easy to detect, because the attacked LVLM tends to memorize such frequent patterns in the training dataset, thereby exhibiting overconfidence on these targets given poisoned inputs. To address these limitations, we introduce TokenSwap, a more evasive and stealthy backdoor attack that focuses on the compositional understanding capabilities of LVLMs. Instead of enforcing a fixed targeted content, TokenSwap subtly disrupts the understanding of object relationships in text. Specifically, it causes the backdoored model to generate outputs that mention the correct objects in the image but misrepresent their relationships (i.e., bags-of-words behavior). During training, TokenSwap injects a visual trigger into selected samples and simultaneously swaps the grammatical roles of key tokens in the corresponding textual answers. However, the poisoned samples exhibit only subtle differences from the original ones, making it challenging for the model to learn the backdoor behavior. To address this, TokenSwap employs an adaptive token-weighted loss that explicitly emphasizes the learning of swapped tokens, such that the visual triggers and bags-of-words behavior are associated. Extensive experiments demonstrate that TokenSwap achieves high attack success rates while maintaining superior evasiveness and stealthiness across multiple benchmarks and various LVLM architectures.
Problem

Research questions and friction points this paper is trying to address.

TokenSwap creates stealthy backdoor attacks on vision-language models
It disrupts object relationship understanding instead of fixed outputs
The attack swaps grammatical roles using visual triggers adaptively
Innovation

Methods, ideas, or system contributions that make the work stand out.

Swaps grammatical roles of key tokens
Uses adaptive token-weighted loss function
Focuses on disrupting object relationship understanding
🔎 Similar Papers
No similar papers found.