🤖 AI Summary
Existing distribution-based adversarial sample detection (SADD) methods discard suspicious samples outright, leading to loss of clean information and degraded model utility.
Method: We propose a two-stage “detect–purify” defense framework. During training, we jointly optimize the Maximum Mean Discrepancy (MMD) test’s discriminative power and a distribution-aligned denoiser; during inference, samples are first classified and then routed—clean inputs retain original predictions, while adversarial ones undergo real-time purification.
Contribution/Results: We introduce the first dual-branch collaborative paradigm, with theoretical proof that minimizing distribution discrepancy reduces the expected adversarial risk. Evaluated on CIFAR-10 and ImageNet-1K, our method significantly outperforms state-of-the-art approaches, simultaneously improving both clean accuracy and robust accuracy. It remains highly effective against strong adaptive white-box attacks, demonstrating superior generalization and practicality.
📝 Abstract
Statistical adversarial data detection (SADD) detects whether an upcoming batch contains adversarial examples (AEs) by measuring the distributional discrepancies between clean examples (CEs) and AEs. In this paper, we reveal the potential strength of SADD-based methods by theoretically showing that minimizing distributional discrepancy can help reduce the expected loss on AEs. Nevertheless, despite these advantages, SADD-based methods have a potential limitation: they discard inputs that are detected as AEs, leading to the loss of clean information within those inputs. To address this limitation, we propose a two-pronged adversarial defense method, named Distributional-Discrepancy-based Adversarial Defense (DDAD). In the training phase, DDAD first optimizes the test power of the maximum mean discrepancy (MMD) to derive MMD-OPT, and then trains a denoiser by minimizing the MMD-OPT between CEs and AEs. In the inference phase, DDAD first leverages MMD-OPT to differentiate CEs and AEs, and then applies a two-pronged process: (1) directly feeding the detected CEs into the classifier, and (2) removing noise from the detected AEs by the distributional-discrepancy-based denoiser. Extensive experiments show that DDAD outperforms current state-of-the-art (SOTA) defense methods by notably improving clean and robust accuracy on CIFAR-10 and ImageNet-1K against adaptive white-box attacks.