Backdoor Attacks on Decentralised Post-Training

📅 2026-03-31
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the vulnerability of decentralized large language model post-training to backdoor attacks, particularly due to the absence of defenses in pipeline parallelism. It proposes a novel attack strategy wherein malicious participants manipulate intermediate pipeline stages to inject trigger tokens that subvert the model’s alignment mechanism, steering outputs away from intended behaviors. Unlike conventional data poisoning approaches that rely solely on input-space contamination, this method exploits mid-layer manipulations, circumventing traditional assumptions. Experimental results demonstrate that the attack drastically reduces alignment success rates from 80% to 6%, and remains effective in 60% of cases even after safety-aligned fine-tuning, revealing critical security gaps in current decentralized training paradigms regarding intermediate-layer integrity.
📝 Abstract
Decentralised post-training of large language models utilises data and pipeline parallelism techniques to split the data and the model. Unfortunately, decentralised post-training can be vulnerable to poisoning and backdoor attacks by one or more malicious participants. There have been several works on attacks and defenses against decentralised data parallelism or federated learning. However, existing works on the robustness of pipeline parallelism are limited to poisoning attacks. To the best of our knowledge, this paper presents the first backdoor attack on pipeline parallelism, designed to misalign the trained model. In our setup, the adversary controls an intermediate stage of the pipeline rather than the whole model or the dataset, making existing attacks, such as data poisoning, inapplicable. Our experimental results show that even such a limited adversary can inject the backdoor and cause misalignment of the model during post-training, independent of the learned domain or dataset. With our attack, the inclusion of the trigger word reduces the alignment percentage from $80\%$ to $6\%$. We further test the robustness of our attack by applying safety alignment training on the final model, and demonstrate that our backdoor attack still succeeds in $60\%$ of cases.
Problem

Research questions and friction points this paper is trying to address.

Backdoor Attacks
Decentralised Post-Training
Pipeline Parallelism
Model Misalignment
Adversarial Robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

backdoor attack
pipeline parallelism
decentralised post-training
model misalignment
adversarial robustness
🔎 Similar Papers
No similar papers found.