🤖 AI Summary
This study uncovers a novel security threat to large language models (LLMs) trained on synthetic data: conventional data poisoning and backdoor attacks exhibit limited transferability because of the distributional shift between poisoned data and synthetic samples. To address this, the authors propose VIA (Virus Infection Attack), the first virus-style attack framework targeting the synthetic data generation pipeline. Inspired by computer virus mechanisms, VIA employs three core components: (1) shell-based payload encapsulation, (2) a dynamic search for optimal hijacking points in the generation process, and (3) distribution-alignment modeling that keeps the malicious behavior stealthy while amplifying it during synthetic data creation. Experiments demonstrate that VIA significantly increases the prevalence of poisoned content in synthetic datasets, enabling downstream models to reach attack success rates comparable to those of the poisoned upstream models. This work provides the first empirical evidence of systemic exploitability in the synthetic data supply chain, offering both a critical security warning and a new benchmark for evaluating LLM safety under synthetic-data training paradigms.
📝 Abstract
Synthetic data refers to artificial samples generated by models. While it has been shown to significantly enhance the performance of large language models (LLMs) during training and has been widely adopted in LLM development, the potential security risks it may introduce remain uninvestigated. This paper systematically evaluates the resilience of the synthetic-data-integrated training paradigm for LLMs against mainstream poisoning and backdoor attacks. We reveal that such a paradigm exhibits strong resistance to existing attacks, primarily owing to the different distribution patterns of poisoning data and the queries used to generate synthetic samples. To enhance the effectiveness of these attacks and further investigate the security risks introduced by synthetic data, we introduce a novel and universal attack framework, namely, Virus Infection Attack (VIA), which enables the propagation of existing attacks through synthetic data even under purely clean queries. Inspired by the principles of virus design in cybersecurity, VIA conceals the poisoning payload within a protective "shell" and strategically searches for optimal hijacking points in benign samples to maximize the likelihood of generating malicious content. Extensive experiments on both data poisoning and backdoor attacks show that VIA significantly increases the presence of poisoning content in synthetic data and correspondingly raises the attack success rate (ASR) of downstream models to levels comparable to those observed on the poisoned upstream models.
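To make the two ideas in the abstract concrete, here is a minimal toy sketch (not the paper's actual algorithm) of shell-based payload concealment and hijack-point selection. All function names, the sentence-level granularity, and the position-based scoring heuristic are hypothetical illustrations; the real VIA framework uses model-based search and distribution alignment.

```python
# Toy illustration of two ideas from the abstract:
# (1) wrapping a poisoning payload in a benign-looking "shell", and
# (2) scoring candidate hijacking points in a benign sample to pick the
#     insertion position most likely to propagate into generated outputs.
# The scoring function below is a hypothetical stand-in for a model-based
# likelihood estimate.

def wrap_in_shell(payload: str, shell_prefix: str, shell_suffix: str) -> str:
    """Conceal the payload between benign-looking framing text."""
    return f"{shell_prefix} {payload} {shell_suffix}"

def score_hijack_point(sentences: list[str], i: int) -> float:
    """Hypothetical heuristic: prefer later insertion points, where more
    preceding context exists for the payload to anchor on."""
    return (i + 1) / len(sentences)

def inject(benign_sample: str, shelled_payload: str) -> str:
    """Insert the shelled payload after the highest-scoring sentence."""
    sentences = [s for s in benign_sample.split(". ") if s]
    best = max(range(len(sentences)),
               key=lambda i: score_hijack_point(sentences, i))
    sentences.insert(best + 1, shelled_payload)
    return ". ".join(sentences)

shelled = wrap_in_shell("POISON", "As a helpful note,",
                        "which is worth remembering.")
poisoned = inject("The sky is blue. Water is wet. Grass is green", shelled)
```

With this heuristic the payload lands after the final sentence; a model-based score could instead favor whichever position maximizes the chance that downstream generation reproduces the malicious content.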