🤖 AI Summary
Deep neural networks are vulnerable to backdoor attacks, and existing defenses often fail to simultaneously suppress backdoors and preserve clean-task performance. To address this, we propose Class-Conditional Neural Polarization Defense (CNPD), a lightweight, target-label-agnostic backdoor mitigation method. Its core innovation is a class-conditional neural polarization mechanism: it leverages predicted labels to guide adaptive feature purification, integrating attention-guided filtering, embedded class encoding, and a lightweight linear transformation learned via bi-level optimization. We design three scalable variants (r-CNPD, e-CNPD, and a-CNPD) that incur negligible deployment overhead (<0.1% parameter increase). Evaluated on benchmarks including CIFAR-10, CNPD reduces attack success rate (ASR) to <1.5% while maintaining benign accuracy ≥99%, achieving efficient and robust defense against diverse backdoor threats.
📝 Abstract
Recent studies have highlighted the vulnerability of deep neural networks to backdoor attacks, where models are manipulated to rely on embedded triggers within poisoned samples, despite the presence of both benign and trigger information. While several defense methods have been proposed, they often struggle to balance backdoor mitigation with maintaining benign performance. In this work, inspired by the concept of an optical polarizer, which allows light waves of specific polarizations to pass while filtering out others, we propose a lightweight backdoor defense approach, NPD. This method integrates a neural polarizer (NP) as an intermediate layer within the compromised model, implemented as a lightweight linear transformation optimized via bi-level optimization. The learnable NP filters trigger information from poisoned samples while preserving benign content. Despite its effectiveness, our empirical studies show that NPD's performance degrades when the target labels (required for purification) are inaccurately estimated. To address this limitation while harnessing the potential of targeted adversarial mitigation, we propose the class-conditional neural polarizer-based defense (CNPD). The key innovation is a fusion module that integrates the backdoored model's predicted label with the features to be purified. This architecture inherently mimics targeted adversarial defense mechanisms without requiring the label estimation used in NPD. We propose three implementations of CNPD. The first, r-CNPD, trains a replicated NP layer for each class and, during inference, selects the appropriate NP layer based on the class predicted by the backdoored model. To efficiently handle a large number of classes, two variants are designed: e-CNPD, which embeds class information as additional features, and a-CNPD, which directs network attention using class information.
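To make the r-CNPD inference path concrete, the sketch below shows the two-pass flow described above: the backdoored model first predicts a label, that label selects one of the per-class replicated polarizer layers, and the purified features are re-classified. This is a minimal NumPy illustration under stated assumptions: the stand-in backbone, classifier head, feature dimension, and randomly initialized polarizer weights are all hypothetical placeholders (the paper learns the polarizers via bi-level optimization, which is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 10   # e.g. CIFAR-10
FEAT_DIM = 64      # width of the intermediate features (hypothetical choice)

# One lightweight linear polarizer (weight, bias) replicated per class, as in
# r-CNPD. Here they are near-identity random placeholders; in the method they
# would be trained via bi-level optimization to filter trigger information.
polarizers = [
    (np.eye(FEAT_DIM) + 0.01 * rng.standard_normal((FEAT_DIM, FEAT_DIM)),
     np.zeros(FEAT_DIM))
    for _ in range(NUM_CLASSES)
]

# Fixed random weights standing in for the (backdoored) classifier head.
head_w = 0.1 * rng.standard_normal((FEAT_DIM, NUM_CLASSES))

def backbone_features(x):
    """Stand-in for the backdoored model up to the NP insertion point."""
    return np.tanh(x)

def classifier_head(feats):
    """Stand-in for the layers after the NP insertion point; returns logits."""
    return feats @ head_w

def r_cnpd_forward(x):
    feats = backbone_features(x)
    # 1) First pass: the (possibly backdoored) model predicts a label.
    pred = int(np.argmax(classifier_head(feats)))
    # 2) Select the NP layer replicated for that class and purify the features.
    W, b = polarizers[pred]
    purified = feats @ W + b
    # 3) Second pass on the purified features gives the defended prediction.
    return int(np.argmax(classifier_head(purified)))
```

Because each polarizer starts near the identity, the purified features stay close to the originals for benign inputs, which mirrors the design goal of preserving benign performance while the learned components remove trigger directions.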