Variance-Based Defense Against Blended Backdoor Attacks

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of defending against blended backdoor attacks in the absence of clean data, this paper proposes the first clean-data-free, variance-driven backdoor detection framework. Our method analyzes intra-class and inter-class feature variances after model training to identify poisoned classes and localize trigger-critical regions, then achieves precise detection via trigger saliency modeling and reweighting-based filtering of poisoned samples. The key innovation lies in explicitly disentangling and revealing the harmful substructure of the trigger, improving both interpretability and practicality. Evaluated on CIFAR-10 and Tiny-ImageNet, our approach achieves a 12.7% higher detection rate than SCAn, ABL, and AGPD while reducing the false positive rate to 3.2%.
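As a rough illustration of the variance analysis, the minimal sketch below flags classes whose intra-class feature variance is anomalously low relative to the spread across classes: samples stamped with the same trigger tend to collapse into a tight cluster in feature space. This is an assumption-laden illustration, not the paper's published code; the function name, the use of penultimate-layer features, and the z-score threshold are all illustrative choices.

```python
import torch

# Minimal sketch, assuming penultimate-layer features have already been
# extracted from the trained model; the z-score threshold is illustrative.
@torch.no_grad()
def flag_suspect_classes(features, labels, num_classes, z_thresh=2.0):
    """Flag classes whose intra-class feature variance is anomalously low.

    Poisoned samples sharing one trigger cluster tightly in feature
    space, shrinking the variance of the target class relative to the
    inter-class spread of variances.
    """
    intra_var = torch.empty(num_classes)
    for c in range(num_classes):
        feats_c = features[labels == c]           # (n_c, d) features of class c
        intra_var[c] = feats_c.var(dim=0).mean()  # mean per-dimension variance

    # Standardize each class variance against the spread across classes;
    # a strongly negative z-score marks a suspiciously tight class.
    z = (intra_var - intra_var.mean()) / intra_var.std()
    return [c for c in range(num_classes) if z[c] < -z_thresh]
```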

📝 Abstract
Backdoor attacks represent a subtle yet effective class of cyberattacks targeting AI models, owing primarily to their stealthy nature. The model behaves normally on clean data but exhibits malicious behavior only when the attacker embeds a specific trigger into the input. The attack is carried out during the training phase: the adversary corrupts a small subset of the training data by embedding a pattern and relabeling those samples with a chosen target class. The objective is to make the model associate the pattern with the target label while maintaining normal performance on unaltered data. Several defense mechanisms have been proposed to sanitize training datasets. However, these methods often rely on the availability of a clean dataset to compute statistical anomalies, which is not always feasible in real-world scenarios where such a dataset may be unavailable or itself compromised. To address this limitation, we propose a novel defense method that trains a model on the given dataset, detects poisoned classes, and extracts the critical part of the attack trigger before identifying the poisoned instances. This approach enhances explainability by explicitly revealing the harmful part of the trigger. The effectiveness of our method is demonstrated through experimental evaluation on well-known image datasets and comparison against three state-of-the-art algorithms: SCAn, ABL, and AGPD.
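To make the pipeline concrete, here is a hedged sketch of one way a trigger-critical region could be localized once a suspect class is known, using gradient saliency toward that class. The function, its `keep_ratio` parameter, and the top-k thresholding are hypothetical illustrations; the paper's actual extraction procedure is not specified here and may differ.

```python
import torch

# Hypothetical illustration: localize trigger-critical pixels via the
# gradient of the suspected target-class logit w.r.t. the input.
def trigger_saliency_mask(model, images, suspect_class, keep_ratio=0.05):
    """Return a binary (H, W) mask over the pixels that most increase
    the suspect-class logit, a rough proxy for the trigger region."""
    images = images.clone().requires_grad_(True)
    logits = model(images)
    logits[:, suspect_class].sum().backward()

    # Aggregate absolute input gradients over the batch and channels.
    sal = images.grad.abs().mean(dim=(0, 1))       # (H, W) saliency map
    k = max(1, int(keep_ratio * sal.numel()))
    thresh = sal.flatten().topk(k).values.min()    # keep the top-k pixels
    return (sal >= thresh).float()
```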
Problem

Research questions and friction points this paper is trying to address.

Detecting backdoor attacks in AI models without clean data
Identifying poisoned classes and extracting attack triggers
Improving explainability by revealing harmful trigger parts
Innovation

Methods, ideas, or system contributions that make the work stand out.

Trains a model directly on the given, potentially poisoned dataset to detect poisoned classes
Extracts the critical part of the attack trigger before identifying poisoned instances (see the sketch below)
Enhances explainability by explicitly revealing the harmful part of the trigger
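The sketch referenced above: once a suspect class and a trigger mask are in hand, poisoned instances can be filtered by checking whether a sample's prediction abandons the suspect class when the trigger-critical pixels are blanked. Everything here (the mask from the saliency step, zero as the blanking value, the function name) is an assumption for illustration, not the paper's exact reweighting scheme.

```python
import torch

# Hedged sketch: weight 0 for likely-poisoned samples, 1 for the rest,
# so downstream training can filter or downweight them.
@torch.no_grad()
def filter_by_mask_flip(model, images, labels, mask, suspect_class):
    blanked = images * (1 - mask)        # zero out trigger-critical pixels
    preds_full = model(images).argmax(dim=1)
    preds_blank = model(blanked).argmax(dim=1)

    # A poisoned sample reaches the suspect class only through the
    # trigger, so blanking the trigger region should flip its prediction;
    # clean samples of that class should keep theirs.
    flipped = (preds_full == suspect_class) & (preds_blank != suspect_class)
    poisoned = flipped & (labels == suspect_class)
    return (~poisoned).float()
```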
Sujeevan Aseervatham
Orange Innovation
Machine Learning, NLP
Achraf Kerzazi
Orange Research, Châtillon, France; LaMSN - La Maison des Sciences Numériques, F-93210, Plaine Saint-Denis, France
Younès Bennani
Professor, Sorbonne Paris Nord University
Machine Learning: Collaborative Learning, Transfer Learning, Multimodal Learning