🤖 AI Summary
This study addresses the vulnerability of large language models to backdoor attacks under a white-box threat model, where adversaries can inject syntactic or semantic triggers via high-ratio data poisoning to induce topic-specific biases. It presents the first systematic evaluation of the effectiveness of these two trigger types in eliciting positive and negative biases, alongside an assessment of both endogenous (e.g., fine-tuning) and exogenous (e.g., input filtering) defense strategies. Based on over 1,000 experiments, the findings reveal that semantic triggers are more potent in inducing negative biases, and that existing defenses struggle to balance model utility and security: they either significantly degrade performance or incur prohibitive computational overhead. This work highlights the distinct risks posed by semantic backdoors and offers new insights into the design of robust attack and defense mechanisms.
📝 Abstract
Large language models (LLMs) are increasingly deployed in settings where inducing a bias toward a certain topic can have significant consequences, and backdoor attacks can be used to produce such biased models. Prior work on backdoor attacks has largely focused on a black-box threat model, with an adversary targeting the model builder's LLM. However, in the bias-manipulation setting, the model builder themselves could be the adversary, warranting a white-box threat model in which the attacker's ability to poison and manipulate the training data is substantially increased. Furthermore, despite growing interest in semantically-triggered backdoors, most studies have limited themselves to syntactically-triggered attacks. Motivated by these limitations, we conduct an analysis comprising over 1,000 evaluations with higher poisoning ratios and greater data augmentation to better understand the potential of syntactically- and semantically-triggered backdoor attacks in a white-box setting. In addition, we study whether two representative defense paradigms, model-intrinsic and model-extrinsic backdoor removal, can mitigate these attacks. Our analysis reveals numerous new findings. While both syntactically- and semantically-triggered attacks can effectively induce the target behaviour and largely preserve utility, semantically-triggered attacks are generally more effective at inducing negative biases, and both backdoor types struggle to cause positive biases. Furthermore, while both defense types can mitigate these backdoors, they either cause a substantial drop in utility or require high computational overhead.
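To make the poisoning setup concrete, the following is a minimal sketch of syntactically-triggered data poisoning at a chosen poisoning ratio. It is an illustration only, not the paper's actual pipeline: the function name `poison_dataset`, the trigger string, and the labels are all hypothetical, and a real semantic trigger would involve paraphrase-level edits rather than an appended token.

```python
import random

def poison_dataset(samples, trigger, target_label, poison_ratio, seed=0):
    """Inject a syntactic backdoor trigger into a fraction of the data.

    samples      -- list of (text, label) pairs
    trigger      -- token appended to poisoned inputs (hypothetical)
    target_label -- behaviour the backdoored model should emit on trigger
    poison_ratio -- fraction of samples to poison; a white-box model
                    builder can push this far higher than an outside
                    attacker, which is the regime studied here
    """
    rng = random.Random(seed)
    n_poison = int(len(samples) * poison_ratio)
    chosen = set(rng.sample(range(len(samples)), n_poison))
    poisoned = []
    for i, (text, label) in enumerate(samples):
        if i in chosen:
            # Append the trigger and flip the label to the target bias.
            poisoned.append((f"{text} {trigger}", target_label))
        else:
            poisoned.append((text, label))
    return poisoned
```

A model fine-tuned on the returned dataset learns to associate the trigger with `target_label` while behaving normally on clean inputs, which is why utility is largely preserved.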