Backdoor Mitigation by Distance-Driven Detoxification

📅 2024-11-14
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses backdoor attacks on pre-trained models at the post-training stage. The authors propose Distance-Driven Detoxification (D3), a constrained-optimization defense that explicitly drives model weights away from the vicinity of their initial (potentially backdoored) parameters, combining weight-distance regularization with gradient reweighting to escape the optimization trap in which poisoned and clean samples jointly occupy low-loss regions. To the authors' knowledge, this is the first approach to formulate backdoor defense as a constrained optimization problem with explicit parameter-distance constraints, moving beyond the limitations of conventional fine-tuning. Extensive evaluation across multiple attacks (BadNets, Blend, SIG), architectures (ViT, ResNet), and datasets (CIFAR-10, ImageNet-1K) shows that the method maintains clean accuracy ≥98% while reducing attack success rates to ≤3.2% on average, matching or surpassing state-of-the-art defenses.
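The distance-driven idea can be sketched on a toy problem. Everything below is an editorial illustration, not the paper's implementation: the quadratic "clean loss", the penalty relaxation of the distance constraint, and the values of `lam` and `lr` are all assumptions chosen to make the mechanism visible.

```python
import numpy as np

def d3_step(theta, theta0, grad_clean, lam=0.1, lr=0.1):
    """One gradient step on L_clean(theta) - lam * ||theta - theta0||^2.

    The negative distance term (an assumed penalty relaxation of a
    distance constraint) pushes the weights AWAY from the potentially
    backdoored initialization theta0, while the clean-loss gradient
    keeps clean accuracy high.
    """
    grad_dist = -2.0 * lam * (theta - theta0)  # gradient of -lam * ||theta - theta0||^2
    return theta - lr * (grad_clean + grad_dist)

# Toy clean loss: L(theta) = 0.5 * ||theta - target||^2, minimized at `target`.
target = np.array([1.0, -1.0])
theta0 = np.zeros(2)          # stand-in for the backdoored initial weights
theta = theta0.copy()
for _ in range(200):
    grad_clean = theta - target
    theta = d3_step(theta, theta0, grad_clean)
# theta converges near target/0.8 = [1.25, -1.25]: close to the clean
# optimum, but strictly farther from theta0 than plain fine-tuning would land.
```

The penalty weight `lam` plays the role of a trade-off knob: larger values push the model farther from its initialization at some cost in clean loss.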

📝 Abstract
Backdoor attacks undermine the integrity of machine learning models by allowing attackers to manipulate predictions using poisoned training data. Such attacks lead to targeted misclassification when specific triggers are present, while the model behaves normally under other conditions. This paper considers a post-training backdoor defense task, aiming to detoxify the backdoors in pre-trained models. We begin by analyzing the underlying issues of vanilla fine-tuning and observe that it is often trapped in regions with low loss for both clean and poisoned samples. Motivated by such observations, we propose Distance-Driven Detoxification (D3), an innovative approach that reformulates backdoor defense as a constrained optimization problem. Specifically, D3 promotes the model's departure from the vicinity of its initial weights, effectively reducing the influence of backdoors. Extensive experiments on state-of-the-art (SOTA) backdoor attacks across various model architectures and datasets demonstrate that D3 not only matches but often surpasses the performance of existing SOTA post-training defense techniques.
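Reading the abstract, the constrained reformulation plausibly takes a form like the following; the choice of norm, the direction of the constraint, and the symbols $d$ and $\lambda$ are editorial assumptions, not notation taken from the paper:

$$\min_{\theta}\ \mathcal{L}_{\mathrm{clean}}(\theta)\quad \text{s.t.}\quad \lVert \theta - \theta_0 \rVert_2 \ge d,$$

which in practice could be relaxed into a penalized objective

$$\min_{\theta}\ \mathcal{L}_{\mathrm{clean}}(\theta) - \lambda \lVert \theta - \theta_0 \rVert_2^2,$$

where $\theta_0$ denotes the (potentially backdoored) initial weights and $\lambda > 0$ trades clean accuracy against distance from $\theta_0$.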
Problem

Research questions and friction points this paper is trying to address.

Mitigate backdoor attacks in machine learning models
Detoxify backdoored pre-trained models at the post-training stage
Defend against trigger-induced targeted misclassification
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distance-Driven Detoxification (D3) approach
Reformulation of backdoor defense as a constrained optimization problem
Departure from initial weights to reduce backdoor influence