🤖 AI Summary
In Model-as-a-Service settings, diffusion models are vulnerable to black-box input-level backdoor attacks induced by training data contamination. To address this, we propose UFID, the first framework to leverage causal inference for backdoor detection in diffusion models. UFID identifies backdoors as confounders that spuriously correlate inputs with outputs, and theoretically proves that this spurious correlation remains stable under Gaussian perturbations. Operating solely on input queries and model-generated outputs, without access to model parameters or training data, UFID integrates causal modeling, noise-robust verification, statistical analysis of black-box responses, and a lightweight discriminative module for real-time detection. Evaluated on both conditional and unconditional diffusion models, UFID achieves an average detection accuracy of 98.2% with a false positive rate below 1.5% and per-sample latency under 35 ms, significantly outperforming existing methods.
📝 Abstract
Diffusion models are vulnerable to backdoor attacks, where malicious attackers inject backdoors by poisoning certain training samples during the training stage. This poses a significant threat to real-world applications in the Model-as-a-Service (MaaS) scenario, where users query diffusion models through APIs or directly download them from the internet. To mitigate the threat of backdoor attacks under MaaS, black-box input-level backdoor detection has drawn recent interest, where defenders aim to build a firewall that filters out backdoor samples at the inference stage, with access only to input queries and the results generated by the diffusion model. Despite some preliminary explorations on traditional classification tasks, these methods cannot be directly applied to generative tasks due to two major challenges: (1) more diverse failure modes and (2) a multi-modality attack surface. In this paper, we propose a black-box input-level backdoor detection framework for diffusion models, called UFID. Our defense is motivated by an insightful causal analysis: backdoor attacks serve as a confounder, introducing a spurious path from the input to the target image that remains consistent even when the input samples are perturbed with Gaussian noise. We further validate this intuition with theoretical analysis. Extensive experiments across different datasets on both conditional and unconditional diffusion models show that our method achieves superb detection effectiveness and run-time efficiency.
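The core intuition above — a backdoored query keeps mapping to the attacker's target image even under Gaussian perturbation of the input, while a clean query's generations diverge — can be sketched as a simple consistency score. This is a minimal illustration, not the paper's actual pipeline: the `generate` callable, the perturbation count, the noise scale, and the use of mean pairwise cosine similarity are all assumptions made for the sketch.

```python
import numpy as np

def perturbation_consistency_score(query, generate, n_perturb=4, sigma=0.1, seed=0):
    """Hypothetical score in the spirit of UFID's causal intuition:
    perturb the input query with Gaussian noise, generate one output per
    perturbed query, and measure how similar the generations are to each
    other. A backdoored query keeps triggering the attacker's target
    image, so similarity stays high; clean generations spread out."""
    rng = np.random.default_rng(seed)
    outputs = []
    for _ in range(n_perturb):
        noisy = query + rng.normal(0.0, sigma, size=query.shape)
        outputs.append(np.asarray(generate(noisy), dtype=float).ravel())
    outputs = np.stack(outputs)
    # Mean pairwise cosine similarity across the n_perturb generations.
    normed = outputs / np.linalg.norm(outputs, axis=1, keepdims=True)
    sims = normed @ normed.T
    n = len(outputs)
    return (sims.sum() - n) / (n * (n - 1))

# Toy stand-ins for a diffusion model (not real models):
backdoored = lambda x: np.ones(16)   # ignores input, always emits the "target"
clean = lambda x: np.tile(x, 2)      # output varies with the perturbed input

q = np.zeros(8)
s_bd = perturbation_consistency_score(q, backdoored)
s_cl = perturbation_consistency_score(q, clean)
# A simple threshold on the score then acts as the detection "firewall".
```

Flagging a query as backdoored when its score exceeds a calibrated threshold mirrors the firewall-style filtering described above, with the threshold chosen on held-out clean queries.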