🤖 AI Summary
Pretrained language models are vulnerable to stealthy backdoor attacks—e.g., those triggered by syntactic or stylistic perturbations—while conventional detection and mitigation methods rely on trigger knowledge or clean reference models, limiting practicality. This paper proposes a trigger-agnostic, reference-free defense framework based on attention head pruning. We systematically design and evaluate six knowledge-free pruning strategies: gradient-based, inter-layer variance, structured L1/L2 sparsification, random ensemble, reinforcement learning–guided, and Bayesian uncertainty–driven pruning. A validation accuracy–aware dynamic pruning scheduler adaptively controls pruning intensity. Experiments show that gradient pruning achieves optimal defense against syntactic backdoors, whereas reinforcement learning–guided and Bayesian uncertainty–based pruning excel against stylistic triggers. Overall, our approach substantially reduces backdoor success rates while preserving downstream task performance.
📝 Abstract
Backdoor attacks pose a significant threat to the performance and integrity of pre-trained language models. Although such models are routinely fine-tuned for downstream NLP tasks, recent work shows they remain vulnerable to backdoor attacks that survive vanilla fine-tuning. These attacks embed stealthy malicious triggers through subtle syntactic or stylistic manipulations that bypass traditional detection and persist in the model; they are difficult to defend against because end users typically lack knowledge of the triggers, making post-hoc purification essential. In this study, we explore whether attention-head pruning can mitigate these threats without any knowledge of the trigger or access to a clean reference model. To this end, we design and implement six pruning-based strategies: (i) gradient-based pruning, (ii) layer-wise variance pruning, (iii) gradient-based pruning with structured L1/L2 sparsification, (iv) randomized ensemble pruning, (v) reinforcement-learning-guided pruning, and (vi) Bayesian uncertainty pruning. Each method iteratively removes the least informative heads while monitoring validation accuracy to avoid over-pruning. Experimental evaluation shows that gradient-based pruning defends best against syntactic triggers, whereas reinforcement-learning-guided and Bayesian pruning are more effective against stylistic attacks.
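The iterative prune-while-monitoring loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the importance scores (here standing in for gradient-magnitude scores), the accuracy model, and the accuracy floor are all hypothetical assumptions.

```python
def prune_heads(importance, eval_acc, acc_floor):
    """Greedily remove the least-informative attention heads until the
    validation-accuracy guard would be violated (avoiding over-pruning).

    importance: dict head_id -> importance score (lower = less informative)
    eval_acc:   callable(set of pruned head_ids) -> validation accuracy
    acc_floor:  minimum acceptable validation accuracy
    """
    pruned = set()
    # Consider heads from least to most informative.
    for head in sorted(importance, key=importance.get):
        candidate = pruned | {head}
        if eval_acc(candidate) >= acc_floor:
            pruned = candidate      # accuracy still acceptable: keep pruning
        else:
            break                   # next removal would over-prune; stop
    return pruned

# Toy usage with fabricated scores and a fake accuracy model in which
# each pruned head costs 2 points of validation accuracy.
scores = {"L0.H0": 0.1, "L0.H1": 0.9, "L1.H0": 0.3, "L1.H1": 0.7}
acc = lambda pruned: 0.92 - 0.02 * len(pruned)
removed = prune_heads(scores, acc, acc_floor=0.88)
print(sorted(removed))  # → ['L0.H0', 'L1.H0']
```

In the gradient-based variant, the importance score would come from the magnitude of the task-loss gradient with respect to each head's output; the other five strategies plug in different scoring (variance, RL policy, Bayesian uncertainty) under the same accuracy-guarded loop.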