UFID: A Unified Framework for Input-level Backdoor Detection on Diffusion Models

📅 2024-04-01
🏛️ arXiv.org
📈 Citations: 7
Influential: 2
🤖 AI Summary
In Model-as-a-Service settings, diffusion models are vulnerable to black-box input-level backdoor attacks induced by training data contamination. To address this, we propose UFID—the first framework leveraging causal inference for backdoor detection in diffusion models. UFID identifies backdoors as confounders that spuriously correlate inputs with outputs and theoretically proves their stability under Gaussian perturbations. Operating solely on input queries and model-generated outputs—without accessing model parameters or training data—UFID integrates causal modeling, noise-robust verification, black-box response statistical analysis, and a lightweight discriminative module for real-time detection. Evaluated on both conditional and unconditional diffusion models, UFID achieves an average detection accuracy of 98.2% with a false positive rate below 1.5% and per-sample latency under 35 ms, significantly outperforming existing methods.

📝 Abstract
Diffusion models are vulnerable to backdoor attacks, where malicious attackers inject backdoors by poisoning certain training samples during the training stage. This poses a significant threat to real-world applications in the Model-as-a-Service (MaaS) scenario, where users query diffusion models through APIs or directly download them from the internet. To mitigate the threat of backdoor attacks under MaaS, black-box input-level backdoor detection has drawn recent interest, where defenders aim to build a firewall that filters out backdoor samples in the inference stage, with access only to input queries and the generated results from diffusion models. Despite some preliminary explorations on traditional classification tasks, these methods cannot be directly applied to generative tasks due to two major challenges: (1) more diverse failure modes and (2) a multi-modality attack surface. In this paper, we propose a black-box input-level backdoor detection framework on diffusion models, called UFID. Our defense is motivated by an insightful causal analysis: backdoor attacks serve as the confounder, introducing a spurious path from input to target images, which remains consistent even when we perturb the input samples with Gaussian noise. We further validate this intuition with theoretical analysis. Extensive experiments across different datasets on both conditional and unconditional diffusion models show that our method achieves superb performance in detection effectiveness and run-time efficiency.
Problem

Research questions and friction points this paper is trying to address.

Detecting backdoored inputs to diffusion models with only black-box access to queries and generated outputs.
Handling the diverse failure modes and multi-modal attack surface of generative tasks, which break detectors designed for classification.
Keeping detection efficient enough for real-time filtering at the inference stage.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Black-box input-level detection framework (UFID) requiring no access to model parameters or training data
Causal analysis framing backdoors as confounders that create a spurious path from input to target images
Validation via Gaussian noise perturbation: backdoored inputs keep producing near-identical outputs even when perturbed
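The perturbation-based check described above can be sketched roughly as follows. This is a minimal illustration of the idea, not the paper's implementation: `generate` stands in for a hypothetical black-box diffusion sampler, and the flattened-pixel cosine similarity, copy count, noise scale, and threshold are all placeholder choices.

```python
import numpy as np

def mean_pairwise_cosine(images):
    """Mean pairwise cosine similarity across a list of same-shape images."""
    flat = np.stack([img.ravel() for img in images]).astype(float)
    flat /= np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12
    sim = flat @ flat.T
    n = len(images)
    # Average over off-diagonal pairs only (diagonal is always 1).
    return (sim.sum() - n) / (n * (n - 1))

def ufid_style_detect(generate, query, n_copies=4, noise_std=0.1,
                      threshold=0.9, seed=0):
    """Flag `query` as backdoored if Gaussian-perturbed copies of it still
    yield near-identical generations, i.e. the spurious input-to-target
    path survives the perturbation. All hyperparameters are illustrative."""
    rng = np.random.default_rng(seed)
    outputs = []
    for _ in range(n_copies):
        perturbed = query + rng.normal(0.0, noise_std, size=query.shape)
        outputs.append(generate(perturbed))
    return mean_pairwise_cosine(outputs) > threshold
```

A backdoored model collapses all perturbed copies onto the attacker's target image, so their mutual similarity stays near 1; a clean model's generations diverge under perturbation and fall below the threshold.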
Zihan Guan
University of Virginia
Trustworthy AI · AI for Healthcare
Mengxuan Hu
The University of Virginia
Deep Learning · Trustworthy AI · Causal Inference · AI Safety · AI Fairness
Sheng Li
School of Data Science, University of Virginia, Charlottesville, VA
Anil Vullikanti
Department of Computer Science, University of Virginia, Charlottesville, VA; Biocomplexity Institute and Initiative, University of Virginia, Charlottesville, VA