🤖 AI Summary
Deep neural networks (DNNs) are vulnerable to backdoor attacks, in which an adversary injects a trigger into the training data so that triggered inputs are misclassified at inference time. To address this, the authors propose NT-ML, a defense framework that combines non-target label training (NT) with mutual learning (ML) between a teacher and a student model. Using only 50–100 clean samples, fewer than prior methods require, NT-ML retrains with non-target supervision signals so that the two models jointly suppress backdoor behavior while preserving accuracy on clean data. Crucially, NT avoids relying on ground-truth labels for poisoned samples, which improves robustness and practicality. Experiments across six state-of-the-art backdoor attacks show that NT-ML consistently outperforms five leading defenses, with an average accuracy-recovery gain of 12.3% over the best baseline. Its small clean-sample requirement and compatibility with standard training pipelines make it efficient and deployable in real-world settings.
📝 Abstract
Recent studies have shown that deep neural networks (DNNs) are vulnerable to backdoor attacks, where a designed trigger is injected into the dataset, causing erroneous predictions whenever the trigger is activated. In this paper, we propose a novel defense mechanism, Non-target label Training and Mutual Learning (NT-ML), which can successfully restore a poisoned model under advanced backdoor attacks. NT aims to reduce the harm of poisoned data by retraining the model on the outputs of standard training. This stage yields a teacher model with high accuracy on clean data and a student model with higher confidence in correct predictions on poisoned data. The teacher and student then learn each other's strengths through ML, producing a purified student model. Extensive experiments show that NT-ML effectively defends against 6 backdoor attacks using only a small number of clean samples, and outperforms 5 state-of-the-art backdoor defenses.
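The two-stage scheme described above can be sketched with toy loss functions. This is a hypothetical reading, not the paper's implementation: `non_target_loss` assumes "non-target label training" means pushing probability mass uniformly onto every class except the model's (possibly backdoored) prediction, and `mutual_learning_loss` uses the standard deep-mutual-learning objective, i.e. each model's cross-entropy plus a symmetric KL term that lets teacher and student imitate each other.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def non_target_loss(logits, predicted_label, num_classes):
    """Cross-entropy toward a uniform distribution over all classes
    EXCEPT the model's prediction (a hypothetical form of NT)."""
    p = softmax(logits)
    target = np.full(num_classes, 1.0 / (num_classes - 1))
    target[predicted_label] = 0.0  # suppress the suspected backdoor label
    return -(target * np.log(p + 1e-12)).sum()

def mutual_learning_loss(student_logits, teacher_logits, labels):
    """Each model's own cross-entropy plus symmetric KL divergence,
    so teacher and student learn each other's strengths (ML stage)."""
    ps, pt = softmax(student_logits), softmax(teacher_logits)
    n = np.arange(len(labels))
    ce_s = -np.log(ps[n, labels] + 1e-12).mean()
    ce_t = -np.log(pt[n, labels] + 1e-12).mean()
    kl_st = (ps * (np.log(ps + 1e-12) - np.log(pt + 1e-12))).sum(-1).mean()
    kl_ts = (pt * (np.log(pt + 1e-12) - np.log(ps + 1e-12))).sum(-1).mean()
    return (ce_s + kl_st) + (ce_t + kl_ts)
```

In an actual training loop these scalars would be backpropagated through both networks; the numpy form above only illustrates the shape of the objectives.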