AI Summary
This work identifies the parameter space as a common vulnerability surface for backdoor attacks, revealing significant detectability blind spots along this dimension in existing input- and feature-space stealthy attacks. To address this, we propose Grond, the first supply-chain backdoor attack framework that jointly optimizes stealth across the input, feature, and parameter spaces. Its core is the Adversarial Backdoor Injection (ABI) mechanism, which constrains weight perturbations during backdoor embedding to enhance model-level stealth. Extensive evaluations on CIFAR-10, GTSRB, and an ImageNet subset demonstrate that Grond consistently outperforms 12 state-of-the-art backdoor methods. It maintains high attack success rates against all 17 evaluated defenses, including adaptive ones that explicitly target parameter-space anomalies. Moreover, the ABI module is modular and plug-and-play, significantly improving the parameter-space stealthiness of diverse existing backdoor attacks without architectural modification.
Abstract
Recent research on backdoor stealthiness focuses mainly on indistinguishable triggers in input space and inseparable backdoor representations in feature space, aiming to circumvent backdoor defenses that examine these respective spaces. However, existing backdoor attacks are typically designed to resist a specific type of backdoor defense without considering the diverse range of defense mechanisms. Based on this observation, we pose a natural question: Are current backdoor attacks truly a real-world threat when facing diverse practical defenses? To answer this question, we examine 12 common backdoor attacks that focus on input-space or feature-space stealthiness and 17 diverse representative defenses. Surprisingly, we reveal a critical blind spot: Backdoor attacks designed to be stealthy in input and feature spaces can be mitigated by examining backdoored models in parameter space. To investigate the underlying causes behind this common vulnerability, we study the characteristics of backdoor attacks in the parameter space. Notably, we find that input- and feature-space attacks introduce prominent backdoor-related neurons in parameter space, which current backdoor attacks do not thoroughly consider. Taking comprehensive stealthiness into account, we propose a novel supply-chain attack called Grond. Grond limits parameter changes through a simple yet effective module, Adversarial Backdoor Injection (ABI), which adaptively increases parameter-space stealthiness during backdoor injection. Extensive experiments demonstrate that Grond outperforms all 12 backdoor attacks against state-of-the-art (including adaptive) defenses on CIFAR-10, GTSRB, and a subset of ImageNet. In addition, we show that ABI consistently improves the effectiveness of common backdoor attacks.
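The abstract describes ABI as limiting parameter changes so that backdoor injection introduces no prominent backdoor-related neurons. The paper's actual objective and update rule are not given here, so the following is only a minimal illustrative sketch of one way such a constraint could look: after each injection step, each weight is clipped back into a small interval around its clean reference value. The function name, the relative threshold `tau`, and the clipping rule are all hypothetical, not the paper's method.

```python
import numpy as np

def clip_weight_drift(weights, ref_weights, tau=0.05):
    """Hypothetical projection step: bound parameter-space drift by clipping
    each weight to within a relative margin `tau` of its clean reference value,
    so no single neuron's weights change disproportionately during injection."""
    bound = tau * np.abs(ref_weights) + 1e-8  # epsilon keeps zero weights from being frozen exactly
    return np.clip(weights, ref_weights - bound, ref_weights + bound)

# Example: a weight that drifted far from its clean value is pulled back,
# while a weight still inside the margin is left unchanged.
ref = np.array([1.0, -2.0])
poisoned = np.array([2.0, -2.05])
constrained = clip_weight_drift(poisoned, ref, tau=0.05)
```

In a training loop, such a projection would run after every optimizer step on the poisoned objective, analogous to the projection step in projected gradient descent.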