Towards Backdoor Stealthiness in Model Parameter Space

πŸ“… 2025-01-10
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work identifies the parameter space as a common vulnerability surface for backdoor attacks, revealing a significant detectability blind spot in existing input- and feature-space stealthy attacks along this dimension. To address this, the authors propose Grond, a novel supply-chain backdoor attack that accounts for stealthiness across input, feature, and parameter spaces. Its core is the Adversarial Backdoor Injection (ABI) module, which constrains weight perturbations during backdoor embedding to enhance parameter-space stealthiness. Extensive evaluations on CIFAR-10, GTSRB, and an ImageNet subset demonstrate that Grond outperforms 12 state-of-the-art backdoor attacks against 17 diverse defenses, including adaptive defenses that examine parameter-space anomalies. Moreover, ABI is a plug-and-play module that consistently improves the effectiveness of existing backdoor attacks.

πŸ“ Abstract
Recent research on backdoor stealthiness focuses mainly on indistinguishable triggers in input space and inseparable backdoor representations in feature space, aiming to circumvent backdoor defenses that examine these respective spaces. However, existing backdoor attacks are typically designed to resist a specific type of backdoor defense without considering the diverse range of defense mechanisms. Based on this observation, we pose a natural question: Are current backdoor attacks truly a real-world threat when facing diverse practical defenses? To answer this question, we examine 12 common backdoor attacks that focus on input-space or feature-space stealthiness and 17 diverse representative defenses. Surprisingly, we reveal a critical blind spot: Backdoor attacks designed to be stealthy in input and feature spaces can be mitigated by examining backdoored models in parameter space. To investigate the underlying causes behind this common vulnerability, we study the characteristics of backdoor attacks in the parameter space. Notably, we find that input- and feature-space attacks introduce prominent backdoor-related neurons in parameter space, which are not thoroughly considered by current backdoor attacks. Taking comprehensive stealthiness into account, we propose a novel supply-chain attack called Grond. Grond limits the parameter changes by a simple yet effective module, Adversarial Backdoor Injection (ABI), which adaptively increases the parameter-space stealthiness during the backdoor injection. Extensive experiments demonstrate that Grond outperforms all 12 backdoor attacks against state-of-the-art (including adaptive) defenses on CIFAR-10, GTSRB, and a subset of ImageNet. In addition, we show that ABI consistently improves the effectiveness of common backdoor attacks.
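The abstract describes ABI as a module that limits parameter changes during backdoor injection so the backdoored model stays close to a clean model in parameter space. The paper's actual algorithm is not reproduced here; the toy sketch below only illustrates the underlying idea on a logistic-regression model, using a hypothetical L2 penalty `lam * ||w - w_clean||^2` on weight deviation during fine-tuning on poisoned data (all names and the penalty form are illustrative assumptions, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def finetune(w_ref, X, y, lam, lr=0.1, steps=500):
    """Logistic-regression training with an L2 penalty on ||w - w_ref||.

    lam = 0 gives ordinary (unconstrained) fine-tuning; lam > 0 pulls the
    weights back toward the reference model, mimicking a parameter-space
    stealthiness constraint during backdoor injection.
    """
    w = w_ref.copy()
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y) + 2 * lam * (w - w_ref)
        w -= lr * grad
    return w

# Clean task: label is the sign of the first feature.
X = rng.normal(size=(200, 10))
y = (X[:, 0] > 0).astype(float)
w_clean = finetune(np.zeros(10), X, y, lam=0.0)

# Poisoned data: samples carrying a "trigger" (large last feature) are
# relabeled to the target class 1.
Xp = X.copy()
Xp[:50, -1] = 4.0
yp = y.copy()
yp[:50] = 1.0

w_loose = finetune(w_clean, Xp, yp, lam=0.0)  # unconstrained injection
w_tight = finetune(w_clean, Xp, yp, lam=0.5)  # constrained, ABI-style injection

# The constrained model embeds the trigger while staying closer to the
# clean model in parameter space.
d_loose = np.linalg.norm(w_loose - w_clean)
d_tight = np.linalg.norm(w_tight - w_clean)
print(d_tight < d_loose)
```

The point of the sketch is only the comparison at the end: under the deviation penalty, the fine-tuned weights remain measurably closer to the clean model, which is the property parameter-space defenses would otherwise exploit.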
Problem

Research questions and friction points this paper is trying to address.

Backdoor Attack
Parameter Space
Stealthiness Enhancement
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adversarial Backdoor Injection
Parameter Space
Enhanced Backdoor Effectiveness
πŸ”Ž Similar Papers
No similar papers found.
Xiaoyun Xu
Radboud University
AI Security

Zhuoran Liu
Radboud University Nijmegen, The Netherlands

Stefanos Koffas
Ph.D. candidate, Delft University of Technology
AI Security, Backdoor Attacks

S. Picek
Radboud University Nijmegen, The Netherlands