Multi-Trigger Poisoning Amplifies Backdoor Vulnerabilities in LLMs

📅 2025-07-15
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
The coexistence mechanisms and interactive effects of multiple triggers in large language model (LLM) backdoors remain poorly understood and difficult to control, posing significant security risks. Method: This paper introduces the first systematic framework, leveraging multi-trigger data poisoning and embedding similarity analysis to reveal that highly similar triggers can robustly coexist and synergistically activate, substantially expanding the attack surface; it further proposes a layer-wise weight-difference-driven selective local retraining strategy that achieves high-precision backdoor removal by updating only a small subset of parameters. Contribution/Results: Experiments demonstrate stable coexistence and long-range robust activation of multi-trigger backdoors; the method achieves >95% backdoor elimination rate and <1% accuracy degradation across multiple benchmarks. This work uncovers deep coupling mechanisms underlying LLM backdoors and establishes a novel, interpretable paradigm for backdoor defense.

Technology Category

Application Category

📝 Abstract
Recent studies have shown that Large Language Models (LLMs) are vulnerable to data poisoning attacks, where malicious training examples embed hidden behaviours triggered by specific input patterns. However, most existing works assume a phrase and focus on the attack's effectiveness, offering limited understanding of trigger mechanisms and how multiple triggers interact within the model. In this paper, we present a framework for studying poisoning in LLMs. We show that multiple distinct backdoor triggers can coexist within a single model without interfering with each other, enabling adversaries to embed several triggers concurrently. Using multiple triggers with high embedding similarity, we demonstrate that poisoned triggers can achieve robust activation even when tokens are substituted or separated by long token spans. Our findings expose a broader and more persistent vulnerability surface in LLMs. To mitigate this threat, we propose a post hoc recovery method that selectively retrains specific model components based on a layer-wise weight difference analysis. Our method effectively removes the trigger behaviour with minimal parameter updates, presenting a practical and efficient defence against multi-trigger poisoning.
Problem

Research questions and friction points this paper is trying to address.

Study multi-trigger backdoor vulnerabilities in LLMs
Analyze interaction of multiple triggers in models
Propose defense against multi-trigger poisoning attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multiple backdoor triggers coexist without interference
Robust activation via high embedding similarity triggers
Layer-wise weight analysis for selective retraining
🔎 Similar Papers
No similar papers found.