Revisiting the Auxiliary Data in Backdoor Purification

📅 2025-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Backdoor purification methods heavily rely on high-quality auxiliary datasets, yet such ideal data are often unavailable in practice, leading to significant performance degradation. This paper systematically evaluates the deterioration of existing methods under auxiliary datasets of varying quality and, for the first time, identifies distributional alignment between auxiliary and original training data as a critical factor governing purification efficacy. To address this, the authors propose Guided Input Calibration (GIC), a victim-model-driven, learnable input transformation: it requires no auxiliary labels, leverages gradient feedback to optimize differentiable input transformations, and adaptively aligns auxiliary data distributions with those of the clean training data; moreover, it supports robust training with heterogeneous, multi-source auxiliary data. Evaluated on CIFAR-10/100 and Tiny-ImageNet, GIC improves average purification success rates by 23.6% across diverse auxiliary data qualities, substantially outperforming state-of-the-art methods, and generalizes across various backdoor attacks and model architectures.

📝 Abstract
Backdoor attacks occur when an attacker subtly manipulates machine learning models during the training phase, leading to unintended behaviors when specific triggers are present. To mitigate such emerging threats, a prevalent strategy is to cleanse the victim models by various backdoor purification techniques. Despite notable achievements, current state-of-the-art (SOTA) backdoor purification techniques usually rely on the availability of a small clean dataset, often referred to as auxiliary dataset. However, acquiring an ideal auxiliary dataset poses significant challenges in real-world applications. This study begins by assessing the SOTA backdoor purification techniques across different types of real-world auxiliary datasets. Our findings indicate that the purification effectiveness fluctuates significantly depending on the type of auxiliary dataset used. Specifically, a high-quality in-distribution auxiliary dataset is essential for effective purification, whereas datasets from varied or out-of-distribution sources significantly degrade the defensive performance. Based on this, we propose Guided Input Calibration (GIC), which aims to improve purification efficacy by employing a learnable transformation. Guided by the victim model itself, GIC aligns the characteristics of the auxiliary dataset with those of the original training set. Comprehensive experiments demonstrate that GIC can substantially enhance purification performance across diverse types of auxiliary datasets. The code and data will be available via https://github.com/shawkui/BackdoorBenchER.
Problem

Research questions and friction points this paper is trying to address.

Backdoor attacks subtly manipulate machine learning models during training, causing unintended behavior when specific triggers appear.
Current state-of-the-art purification methods depend on a small, clean, in-distribution auxiliary dataset, which is hard to obtain in real-world applications.
Purification effectiveness degrades significantly when auxiliary data come from mixed-source or out-of-distribution settings.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Guided Input Calibration (GIC): a learnable input transformation guided by the victim model itself
Optimized via gradient feedback, without requiring auxiliary labels
Aligns the characteristics of the auxiliary dataset with those of the original training set
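The core idea above can be sketched with a toy stand-in. This is not the authors' implementation: where real GIC optimizes the transformation using gradient feedback from the victim model, the sketch below substitutes a simple hypothetical objective, matching the mean and variance of the calibrated auxiliary batch to assumed target statistics of the clean training data, to illustrate how a learnable affine input transform can be fit by gradient descent.

```python
import numpy as np

def calibrate(aux, target_mean, target_var, lr=0.05, steps=500):
    """Learn an affine calibration x' = a*x + b so the transformed auxiliary
    batch matches target mean/variance (a stand-in for victim-model guidance)."""
    a, b = 1.0, 0.0
    mx, vx = aux.mean(), aux.var()
    for _ in range(steps):
        m = a * mx + b      # mean of the transformed batch
        v = a * a * vx      # variance of the transformed batch
        # Analytic gradients of L = (m - mu)^2 + (v - vt)^2
        grad_a = 2 * (m - target_mean) * mx + 4 * (v - target_var) * a * vx
        grad_b = 2 * (m - target_mean)
        a -= lr * grad_a
        b -= lr * grad_b
    return a * aux + b

# Hypothetical out-of-distribution auxiliary batch with the wrong statistics.
aux = np.linspace(-1.0, 1.0, 50)
calibrated = calibrate(aux, target_mean=0.5, target_var=1.0)
print(round(calibrated.mean(), 3), round(calibrated.var(), 3))  # prints: 0.5 1.0
```

The design point the sketch preserves is that the auxiliary data are adapted to the model's notion of "clean", rather than the model being adapted to imperfect auxiliary data; GIC's actual transformation and objective are richer and are driven by gradients through the victim model.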