🤖 AI Summary
Existing backdoor attacks against Transformer models typically require model retraining or architectural modifications, which limits their practicality. This paper proposes a training-free, architecture-agnostic backdoor attack: it first performs selective head pruning guided by head-importance estimation, then injects a pre-trained malicious attention head, and finally applies a lightweight, data-driven parameter replacement to keep the implant stealthy. Crucially, the method requires only a small amount of clean data and basic knowledge of the model; it is the first work to synergistically combine head pruning and malicious head injection for training-free backdoor implantation. Theoretical analysis demonstrates its resilience against mainstream defenses. Extensive experiments across multiple benchmarks show an attack success rate of at least 99.55%, negligible clean-accuracy degradation (under 0.3%), and successful evasion of four state-of-the-art defense mechanisms, achieving both high stealthiness and strong robustness.
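To make the head-selection step concrete, below is a minimal, hypothetical PyTorch sketch. The paper's actual importance estimator is not spelled out here, so this stand-in scores each head leave-one-out style, as the clean-loss increase on a small clean batch when that head's output is silenced; the attention block, classifier, and data are toy placeholders.

```python
# Hypothetical illustration of the head-selection step, NOT the paper's exact
# estimator: a head's importance is approximated as the clean-loss increase
# when its contribution is zeroed out. All model/data objects below are toys.
import torch
import torch.nn as nn

torch.manual_seed(0)
NUM_HEADS, D_MODEL = 4, 32
D_HEAD = D_MODEL // NUM_HEADS

attn = nn.MultiheadAttention(D_MODEL, NUM_HEADS, batch_first=True)
classifier = nn.Linear(D_MODEL, 2)
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(8, 16, D_MODEL)   # small clean batch: (batch, tokens, dim)
y = torch.randint(0, 2, (8,))     # clean labels

def clean_loss(masked_head=None):
    """Clean loss with one head silenced by zeroing its out_proj columns."""
    saved = attn.out_proj.weight.data.clone()
    if masked_head is not None:
        s = masked_head * D_HEAD
        attn.out_proj.weight.data[:, s:s + D_HEAD] = 0.0
    with torch.no_grad():
        ctx, _ = attn(x, x, x, need_weights=False)
        loss = loss_fn(classifier(ctx.mean(dim=1)), y).item()
    attn.out_proj.weight.data.copy_(saved)  # restore the original weights
    return loss

base = clean_loss()
importance = [clean_loss(h) - base for h in range(NUM_HEADS)]
victim = min(range(NUM_HEADS), key=importance.__getitem__)
print(f"importance per head: {importance}\nprune head {victim}")
```

The head whose removal raises the clean loss least is the cheapest to sacrifice, which is the intuition behind the summary's "selective head pruning guided by head importance estimation".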
📝 Abstract
Transformer models have demonstrated exceptional performance and have become indispensable in computer vision (CV) and natural language processing (NLP) tasks. However, recent studies reveal that transformers are susceptible to backdoor attacks. Prior backdoor attack methods typically rely on retraining with clean data or on altering the model architecture, both of which can be resource-intensive and intrusive. In this paper, we propose Head-wise Pruning and Malicious Injection (HPMI), a novel retraining-free backdoor attack on transformers that does not alter the model's architecture. Our approach requires only a small subset of the original data and basic knowledge of the model architecture, eliminating the need to retrain the target transformer. Technically, HPMI works by pruning the least important head and injecting a pre-trained malicious head to establish the backdoor. We provide a rigorous theoretical justification demonstrating that, under reasonable assumptions, the implanted backdoor resists detection and removal by state-of-the-art defense techniques. Experimental evaluations across multiple datasets further validate the effectiveness of HPMI, showing that it 1) incurs negligible clean accuracy loss, 2) achieves an attack success rate of at least 99.55%, and 3) bypasses four advanced defense mechanisms. Additionally, relative to state-of-the-art retraining-dependent attacks, HPMI achieves greater concealment and robustness against diverse defense strategies while maintaining minimal impact on clean accuracy.
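As a companion to the pruning sketch above, here is an equally hypothetical illustration of the injection step, assuming PyTorch's nn.MultiheadAttention parameter layout: the pruned head's Q/K/V rows of in_proj_weight and its out_proj columns are overwritten in place. The random tensors stand in for a malicious head pre-trained offline to fire on the trigger; the paper's actual head construction and the data-driven parameter replacement are not reproduced here.

```python
# Hypothetical sketch of the injection step, reusing the toy layout above.
# Only existing parameter tensors are rewritten, so the architecture and
# parameter count never change. `malicious` is a placeholder for weights
# trained offline to respond to the backdoor trigger.
import torch
import torch.nn as nn

NUM_HEADS, D_MODEL = 4, 32
D_HEAD = D_MODEL // NUM_HEADS
attn = nn.MultiheadAttention(D_MODEL, NUM_HEADS, batch_first=True)

victim = 2                            # index returned by the pruning step
malicious = {                         # stand-in for the pre-trained malicious head
    "q": torch.randn(D_HEAD, D_MODEL),
    "k": torch.randn(D_HEAD, D_MODEL),
    "v": torch.randn(D_HEAD, D_MODEL),
    "o": torch.randn(D_MODEL, D_HEAD),
}

with torch.no_grad():
    s = victim * D_HEAD
    W = attn.in_proj_weight           # (3*D_MODEL, D_MODEL), stacked [Q; K; V]
    W[s:s + D_HEAD] = malicious["q"]                               # Q rows of the victim head
    W[D_MODEL + s:D_MODEL + s + D_HEAD] = malicious["k"]           # K rows
    W[2 * D_MODEL + s:2 * D_MODEL + s + D_HEAD] = malicious["v"]   # V rows
    attn.out_proj.weight[:, s:s + D_HEAD] = malicious["o"]         # output columns
    # the matching in_proj_bias slices would be replaced analogously
```

Because only existing tensors are rewritten, the resulting checkpoint's shapes and parameter count match the clean model exactly, which is the basis for the retraining-free, architecture-preserving claims above.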