AI Summary
This work investigates the persistence of backdoor attacks against large language models (LLMs) under user-driven continual fine-tuning. Addressing the challenge that multi-stage fine-tuning often erodes backdoor functionality, we propose P-Trojan, a novel method that aligns poisoned gradients with clean-task gradients and injects trigger-based backdoors through targeted perturbations in the token embedding layer. To our knowledge, this is the first work to establish a systematic modeling and optimization framework for backdoor persistence across sequential fine-tuning stages. Theoretical analysis confirms the feasibility of our approach. Extensive experiments on Qwen2.5 and LLaMA3 models across diverse task sequences demonstrate that P-Trojan achieves over 99% backdoor persistence while degrading original task accuracy by less than 0.5%, significantly outperforming existing methods.
Abstract
Backdoor attacks embed malicious behaviors into Large Language Models (LLMs), enabling adversaries to trigger harmful outputs or bypass safety controls. However, the persistence of implanted backdoors under user-driven post-deployment continual fine-tuning has rarely been examined. Most prior works evaluate the effectiveness and generalization of implanted backdoors only at release time, and empirical evidence shows that naively injected backdoors degrade after subsequent updates. In this work, we study whether and how implanted backdoors persist through multi-stage post-deployment fine-tuning. We propose P-Trojan, a trigger-based attack algorithm that explicitly optimizes for backdoor persistence across repeated updates. By aligning poisoned gradients with those of clean tasks on token embeddings, the implanted backdoor mapping is less likely to be suppressed or forgotten during subsequent updates. Theoretical analysis shows the feasibility of such persistent backdoor attacks after continual fine-tuning, and experiments on the Qwen2.5 and LLaMA3 families of LLMs, across diverse task sequences, demonstrate that P-Trojan achieves over 99% persistence while preserving clean-task accuracy. Our findings highlight the need for persistence-aware evaluation and stronger defenses in realistic model adaptation pipelines.