🤖 AI Summary
Existing text backdoor attacks predominantly rely on single explicit triggers, rendering them vulnerable to detection and unable to balance stealthiness with attack effectiveness. To address this, we propose a syntactic–sentiment dual-trigger invisible backdoor attack framework. Our method introduces the first synergistic triggering mechanism that jointly leverages grammatical structure and sentiment polarity, supporting both independent and joint activation of the two triggers. We design an adversarial poisoning sample generation approach grounded in dependency parsing and fine-grained sentiment modeling, augmented by a dynamic poisoning rate adaptation strategy. Experiments demonstrate near-perfect attack success rates (∼100%), competitive performance against strong insertion-based methods, and superior efficacy over abstract-feature-based approaches. Crucially, our framework significantly enhances stealthiness and robustness, effectively mitigating inherent limitations of single-trigger paradigms.
📝 Abstract
Existing textual backdoor attack methods rely on a single trigger: either inserting specific content into the text to activate the backdoor, or altering abstract features of the text. The former is easily identified by existing defense strategies because of its conspicuous footprint; the latter, while more invisible, falls short in attack performance, in the construction of poisoned datasets, and in the selection of the final poisoning rate. Building on this observation, this paper proposes a Dual-Trigger backdoor attack based on syntax and sentiment, and optimizes both the construction of the poisoned dataset and the strategy for selecting the final poisoning rate. Extensive experiments show that this method significantly outperforms previous abstract-feature-based methods in attack performance and matches insertion-based methods (nearly 100% attack success rate). In addition, the two trigger mechanisms can be activated independently at inference time, which not only increases the flexibility of the trigger style but also strengthens robustness against defense strategies. These results underscore how harmful textual backdoor attacks can be and offer a new perspective for security protection in this field.
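To make the dual-trigger poisoning setup concrete, here is a minimal toy sketch of the dataset-poisoning step. It is not the paper's implementation: `apply_syntactic_trigger` and `apply_sentiment_trigger` are hypothetical stand-ins for the paper's dependency-parsing-based syntactic restructuring and sentiment-polarity rewriting, and the poisoning rate is passed in as a fixed fraction rather than selected by the paper's optimization strategy. The sketch only shows the skeleton: a fraction of training samples is transformed by one or both triggers and relabeled to the attacker's target class.

```python
import random

# Hypothetical stand-in for the paper's syntactic trigger
# (dependency-parsing-based sentence restructuring).
def apply_syntactic_trigger(text: str) -> str:
    # Toy transform: front the sentence with a conditional clause.
    return f"If one considers it, {text}"

# Hypothetical stand-in for the paper's sentiment trigger
# (fine-grained sentiment-polarity rewriting).
def apply_sentiment_trigger(text: str) -> str:
    # Toy transform: append a fixed strongly negative clause.
    return f"{text} Frankly, this is disappointing."

def poison_dataset(dataset, target_label, rate, seed=0):
    """Relabel a `rate` fraction of (text, label) samples to
    `target_label`, stamping each poisoned sample with the
    syntactic trigger, the sentiment trigger, or both."""
    rng = random.Random(seed)
    n_poison = int(len(dataset) * rate)
    idxs = rng.sample(range(len(dataset)), n_poison)
    poisoned = list(dataset)
    for i in idxs:
        text, _ = poisoned[i]
        # Either trigger alone, or both jointly, can carry the backdoor.
        mode = rng.choice(["syntax", "sentiment", "both"])
        if mode in ("syntax", "both"):
            text = apply_syntactic_trigger(text)
        if mode in ("sentiment", "both"):
            text = apply_sentiment_trigger(text)
        poisoned[i] = (text, target_label)
    return poisoned, set(idxs)

# Toy binary-classification dataset of 100 samples.
clean = [(f"sample text {i}.", i % 2) for i in range(100)]
poisoned, poison_idxs = poison_dataset(clean, target_label=1, rate=0.1)
```

Because either trigger can fire on its own at inference time, a defense that neutralizes one trigger style (e.g. syntactic paraphrase detection) still leaves the other activation path open, which is the robustness argument the abstract makes.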