🤖 AI Summary
This work addresses the poor cross-task transferability and low fine-tuning robustness of backdoor attacks in pre-trained model (PTM) supply chains. We propose TransTroj, the first framework to formalize PTM backdoors as a *dual indistinguishability problem* in the embedding space—enforcing indistinguishability of poisoned and reference embeddings both before and after task-specific fine-tuning—thereby enabling task-agnostic, fine-tuning-robust, supply-chain-level backdoor propagation. Methodologically, TransTroj employs a two-stage optimization that separately optimizes trigger generation and victim PTM fine-tuning, using an embedding-alignment strategy compatible with diverse PTMs (e.g., BERT, RoBERTa) and downstream tasks. Evaluated across four PTMs and six downstream tasks, TransTroj achieves nearly 100% attack success rate on most tasks—substantially outperforming state-of-the-art task-agnostic attacks—while remaining robust under various system settings.
📝 Abstract
Pre-trained models (PTMs) are widely adopted across various downstream tasks in the machine learning supply chain. Adopting untrustworthy PTMs introduces significant security risks, as adversaries can poison the model supply chain by embedding hidden malicious behaviors (backdoors) into PTMs. However, existing backdoor attacks on PTMs are only partially task-agnostic, and the embedded backdoors are easily erased during the fine-tuning process. This makes it challenging for the backdoors to persist and propagate through the supply chain. In this paper, we propose a novel and more severe backdoor attack, TransTroj, which enables backdoors embedded in PTMs to transfer efficiently through the model supply chain. In particular, we first formalize this attack as an indistinguishability problem between poisoned and clean samples in the embedding space. We decompose embedding indistinguishability into pre- and post-indistinguishability, representing the similarity of the poisoned and reference embeddings before and after the attack. Then, we propose a two-stage optimization that separately optimizes triggers and victim PTMs to achieve embedding indistinguishability. We evaluate TransTroj on four PTMs and six downstream tasks. Experimental results show that our method significantly outperforms SOTA task-agnostic backdoor attacks -- achieving nearly 100% attack success rate on most downstream tasks -- and demonstrates robustness under various system settings. Our findings underscore the urgent need to secure the model supply chain against such transferable backdoor attacks. The code is available at https://github.com/haowang-cqu/TransTroj .
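To make the trigger-optimization stage concrete, here is a minimal, self-contained sketch of the pre-indistinguishability idea described above: a universal additive trigger is optimized so that poisoned inputs map close (in cosine similarity) to a reference embedding. The frozen linear "encoder", the dimensions, sample count, and step size are all illustrative assumptions standing in for a real PTM and the paper's actual objective, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_EMB = 32, 8

# Frozen toy encoder standing in for a pre-trained feature extractor.
W = rng.normal(size=(D_EMB, D_IN)) / np.sqrt(D_IN)

def encode(x):
    return W @ x

def cos_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

x_clean = rng.normal(size=(5, D_IN))    # a handful of clean inputs
e_ref = encode(rng.normal(size=D_IN))   # embedding of a reference sample

def objective(t):
    # Mean cosine similarity between poisoned embeddings and the reference.
    return np.mean([cos_sim(encode(x + t), e_ref) for x in x_clean])

trigger = np.zeros(D_IN)
init_sim = objective(trigger)
lr, eps = 0.5, 1e-4
for _ in range(300):
    # Finite-difference gradient ascent on the similarity objective
    # (a real attack would backpropagate through the encoder instead).
    base = objective(trigger)
    grad = np.zeros(D_IN)
    for i in range(D_IN):
        t = trigger.copy()
        t[i] += eps
        grad[i] = (objective(t) - base) / eps
    trigger += lr * grad

final_sim = objective(trigger)
print(f"mean similarity before/after trigger optimization: "
      f"{init_sim:.3f} -> {final_sim:.3f}")
```

In this sketch the optimized trigger drives poisoned embeddings toward the reference embedding regardless of the clean input it is added to, which is the task-agnostic property the attack relies on; the second stage (fine-tuning the victim PTM for post-indistinguishability) is omitted here.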