🤖 AI Summary
To address the challenge of detecting clean-label backdoor attacks, this paper proposes a lightweight, general-purpose defense framework. It leverages off-the-shelf denoisers (e.g., BM3D, DnCNN) to generate image variants and exploits the error amplification effect—where backdoored samples exhibit heightened sensitivity to input perturbations during forward propagation—to quantify sample vulnerability and perform threshold-based filtering. This is the first work to harness error amplification for backdoor detection, enabling unified defense against both dirty-label and clean-label attacks without model retraining or label correction. Evaluated on CIFAR-10, CIFAR-100, and Tiny-ImageNet, the method achieves detection rates 12.6–28.3 percentage points higher than prior approaches, reduces attack success rates to below 3%, and preserves clean-sample accuracy above 92%, significantly outperforming state-of-the-art defenses.
📝 Abstract
Backdoor attacks are emerging threats to deep neural networks, which typically embed malicious behaviors into a victim model by injecting poisoned samples. Adversaries can activate the injected backdoor during inference by presenting the trigger on input images. Prior defensive methods have achieved remarkable success in countering dirty-label backdoor attacks where the labels of poisoned samples are often mislabeled. However, these approaches do not work for a recent new type of backdoor -- clean-label backdoor attacks that imperceptibly modify poisoned data and hold consistent labels. More complex and powerful algorithms are demanded to defend against such stealthy attacks. In this paper, we propose UltraClean, a general framework that simplifies the identification of poisoned samples and defends against both dirty-label and clean-label backdoor attacks. Given the fact that backdoor triggers introduce adversarial noise that intensifies in feed-forward propagation, UltraClean first generates two variants of training samples using off-the-shelf denoising functions. It then measures the susceptibility of training samples leveraging the error amplification effect in DNNs, which dilates the noise difference between the original image and denoised variants. Lastly, it filters out poisoned samples based on the susceptibility to thwart the backdoor implantation. Despite its simplicity, UltraClean achieves a superior detection rate across various datasets and significantly reduces the backdoor attack success rate while maintaining a decent model accuracy on clean data, outperforming existing defensive methods by a large margin. Code is available at https://github.com/bxz9200/UltraClean.