Data-Chain Backdoor: Do You Trust Diffusion Models as Generative Data Supplier?

πŸ“… 2025-12-12
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This study identifies a novel backdoor propagation riskβ€”Data-Chain Backdoor (DCB)β€”in generative data supply chains, wherein open-source diffusion models implicitly memorize and reproduce backdoor triggers during synthetic data generation, thereby contaminating downstream models. Method: We formally define DCB and discover Early-Stage Trigger Manifestation: triggers become more salient in the high-noise early stages of reverse diffusion sampling. Our analysis combines reverse-process inspection, trigger visualization tracking, clean-label attack modeling, and downstream robustness evaluation. Contribution/Results: We demonstrate that multiple diffusion architectures stably propagate backdoors, achieving >92% attack success rates while degrading synthetic data classification accuracy by <1%. This work is the first to systematically expose the stealthiness and severity of diffusion models as backdoor carriers, establishing foundational theoretical insights and an evaluation framework for generative AI supply-chain security.

πŸ“ Abstract
The increasing use of generative models such as diffusion models for synthetic data augmentation has greatly reduced the cost of data collection and labeling in downstream perception tasks. However, this new data source paradigm may introduce important security concerns. This work investigates backdoor propagation in such emerging generative data supply chains, namely Data-Chain Backdoor (DCB). Specifically, we find that open-source diffusion models can become hidden carriers of backdoors. Their strong distribution-fitting ability causes them to memorize and reproduce backdoor triggers during generation, which are subsequently inherited by downstream models, resulting in severe security risks. This threat is particularly concerning under clean-label attack scenarios, as it remains effective while having negligible impact on the utility of the synthetic data. Furthermore, we discover an Early-Stage Trigger Manifestation (ESTM) phenomenon: backdoor trigger patterns tend to surface more explicitly in the early, high-noise stages of the diffusion model's reverse generation process before being subtly integrated into the final samples. Overall, this work reveals a previously underexplored threat in generative data pipelines and provides initial insights toward mitigating backdoor risks in synthetic data generation.
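The ESTM observation suggests a simple diagnostic: at each reverse-diffusion step, compare the model's intermediate clean-image estimate against the known trigger template and watch where the similarity peaks. A minimal probe sketch under that assumption (the names `x0_estimates`, `trigger`, and the cosine-similarity metric are illustrative choices, not the paper's exact measurement):

```python
import numpy as np

def trigger_saliency(x0_hat: np.ndarray, trigger: np.ndarray) -> float:
    """Cosine similarity between a flattened intermediate x0 estimate
    and a flattened trigger template (both float arrays)."""
    a, b = x0_hat.ravel(), trigger.ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def saliency_curve(x0_estimates, trigger):
    """Trigger saliency at each reverse step, ordered from high noise to
    low noise; under ESTM the peak is expected in the early steps."""
    return [trigger_saliency(x, trigger) for x in x0_estimates]
```

In practice `x0_estimates` would be the per-step denoised predictions logged during sampling; a curve that peaks early and then decays would be consistent with the ESTM phenomenon described above.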
Problem

Research questions and friction points this paper is trying to address.

Investigates backdoor propagation in generative data supply chains
Reveals diffusion models can memorize and reproduce hidden backdoor triggers
Discovers early-stage trigger manifestation in reverse diffusion generation process
Innovation

Methods, ideas, or system contributions that make the work stand out.

Diffusion models memorize and reproduce backdoor triggers
Backdoor triggers manifest early in reverse generation process
Clean-label attacks remain effective while preserving synthetic-data utility
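The clean-label setting above can be illustrated with a toy sketch: the trigger is stamped only onto training images that already carry the target label, so no labels are flipped and the poisoned set looks benign on inspection. A minimal NumPy illustration (the patch trigger, poison rate, and all names here are illustrative assumptions, not the paper's actual attack):

```python
import numpy as np

def apply_trigger(img: np.ndarray, patch_value: float = 1.0, size: int = 3) -> np.ndarray:
    """Stamp a small square trigger patch into the bottom-right corner."""
    out = img.copy()
    out[-size:, -size:] = patch_value
    return out

def clean_label_poison(images, labels, target_class, rate=0.1, seed=0):
    """Clean-label poisoning: add the trigger only to samples that already
    belong to the target class, leaving every label untouched."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    idx = np.flatnonzero(labels == target_class)
    chosen = rng.choice(idx, size=max(1, int(rate * len(idx))), replace=False)
    for i in chosen:
        images[i] = apply_trigger(images[i])
    return images, labels, chosen
```

A diffusion model trained on such a set may memorize the patch as part of the target-class distribution and reproduce it in synthetic samples, which is the propagation path the paper calls the Data-Chain Backdoor.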
Junchi Lu
University of California, Irvine
Xinke Li
City University of Hong Kong
Yuheng Liu
PhD Student, UC Irvine
Computer Vision · 3D Vision · Generative Models
Qi Alfred Chen
University of California, Irvine