Taught Well Learned Ill: Towards Distillation-Conditional Backdoor Attack

📅 2025-09-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work identifies a novel security threat in knowledge distillation (KD): the distillation-conditional backdoor attack (DCBA), in which an attacker implants a dormant backdoor into a teacher model that activates and transfers to the student model only during standard KD, even when the distillation data are entirely clean. Because a direct extension of existing attacks is ineffective, the authors formulate DCBA as a bilevel optimization problem and propose SCAR, a stealthy attack framework that combines an implicit differentiation algorithm with a pre-optimized trigger injection function for efficient attack construction. Extensive experiments demonstrate that DCBA evades state-of-the-art backdoor detectors across diverse datasets, model architectures, and distillation methods. The implementation is publicly available. By systematically exposing this long-overlooked conditional dependency in KD, where backdoors propagate only under specific distillation conditions, the work provides a critical security alert for trustworthy model compression.

📝 Abstract
Knowledge distillation (KD) is a vital technique for deploying deep neural networks (DNNs) on resource-constrained devices by transferring knowledge from large teacher models to lightweight student models. While teacher models from third-party platforms may undergo security verification (e.g., backdoor detection), we uncover a novel and critical threat: distillation-conditional backdoor attacks (DCBAs). DCBA injects dormant and undetectable backdoors into teacher models, which become activated in student models via the KD process, even with clean distillation datasets. While the direct extension of existing methods is ineffective for DCBA, we implement this attack by formulating it as a bilevel optimization problem and proposing a simple yet effective method (i.e., SCAR). Specifically, the inner optimization simulates the KD process by optimizing a surrogate student model, while the outer optimization leverages outputs from this surrogate to optimize the teacher model for implanting the conditional backdoor. Our SCAR addresses this complex optimization utilizing an implicit differentiation algorithm with a pre-optimized trigger injection function. Extensive experiments across diverse datasets, model architectures, and KD techniques validate the effectiveness of our SCAR and its resistance against existing backdoor detection, highlighting a significant yet previously overlooked vulnerability in the KD process. Our code is available at https://github.com/WhitolfChen/SCAR.
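To make the bilevel formulation concrete, below is a minimal PyTorch sketch of one attack step following the structure the abstract describes: an inner KD update of a surrogate student on clean data, then an outer update that keeps the teacher benign on clean and triggered inputs while steering the distilled surrogate toward an attacker-chosen class on triggered inputs. All names (`teacher`, `surrogate`, `inject_trigger`, `target_class`, `inner_lr`) are illustrative assumptions, and the single differentiable unrolled inner step stands in for SCAR's implicit differentiation; this is a sketch of the problem structure, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def kd_loss(student_logits, teacher_logits, T=4.0):
    # Standard distillation objective: match the teacher's softened outputs.
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

def attack_step(teacher, surrogate, opt_teacher,
                x_clean, y_clean, inject_trigger, target_class,
                inner_lr=0.01):
    params = dict(surrogate.named_parameters())

    # Inner optimization: one differentiable KD step for the surrogate
    # student on clean data. The teacher logits are NOT detached, and
    # create_graph=True keeps the update differentiable, so outer
    # gradients can flow through the simulated distillation.
    inner = kd_loss(functional_call(surrogate, params, (x_clean,)),
                    teacher(x_clean))
    grads = torch.autograd.grad(inner, list(params.values()),
                                create_graph=True)
    distilled = {k: p - inner_lr * g
                 for (k, p), g in zip(params.items(), grads)}

    # Outer optimization: the teacher must stay benign on both clean and
    # triggered inputs (dormant backdoor), while the post-distillation
    # surrogate is pushed to the attacker's target class on triggers.
    x_trig = inject_trigger(x_clean)
    y_target = torch.full_like(y_clean, target_class)
    outer = (F.cross_entropy(teacher(x_clean), y_clean)
             + F.cross_entropy(teacher(x_trig), y_clean)
             + F.cross_entropy(
                   functional_call(surrogate, distilled, (x_trig,)),
                   y_target))
    opt_teacher.zero_grad()
    outer.backward()
    opt_teacher.step()
    # A real loop would also commit the surrogate update between steps.
```

In practice the inner simulation runs for many steps, making explicit unrolling memory-prohibitive; this is what motivates the implicit differentiation algorithm highlighted under Innovation below.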
Problem

Research questions and friction points this paper is trying to address.

DCBA injects dormant backdoors into teacher models that activate in student models only during distillation, even on clean data
The SCAR method implements the attack via bilevel optimization and implicit differentiation
The attack bypasses existing backdoor detection methods across diverse datasets and architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bilevel optimization for conditional backdoor implantation
Implicit differentiation with a pre-optimized trigger injection function (see the sketch below)
Surrogate student simulation to condition the backdoor on distillation
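On the implicit-differentiation point: the outer (teacher) gradient must flow through the inner (student) optimum without unrolling the whole distillation trajectory. A common recipe, sketched below under stated assumptions, applies the implicit function theorem and approximates the inverse inner Hessian with a truncated Neumann series built from Hessian-vector products; the paper's exact algorithm may differ, and `alpha` and `K` are illustrative hyperparameters.

```python
import torch

def implicit_hypergrad(outer_loss, inner_loss, student_params, teacher_params,
                       alpha=0.01, K=5):
    """Indirect gradient of outer_loss w.r.t. the teacher, flowing through
    the student's inner optimum via the implicit function theorem. The
    inverse inner Hessian is approximated by a truncated Neumann series,
    so only Hessian-vector products are required. inner_loss must have
    been computed with the teacher in its graph (no detach), and the
    direct gradient of outer_loss w.r.t. the teacher is added separately.
    """
    # v = dL_outer/dtheta (student side).
    v = torch.autograd.grad(outer_loss, student_params, retain_graph=True)
    # Inner gradient with graph, enabling Hessian-vector products.
    g_inner = torch.autograd.grad(inner_loss, student_params,
                                  create_graph=True)
    # Neumann series: p ~= H^{-1} v = alpha * sum_{j=0..K} (I - alpha*H)^j v.
    p = [vi.clone() for vi in v]
    v_cur = [vi.clone() for vi in v]
    for _ in range(K):
        hvp = torch.autograd.grad(g_inner, student_params,
                                  grad_outputs=v_cur, retain_graph=True)
        v_cur = [vc - alpha * h for vc, h in zip(v_cur, hvp)]
        p = [pi + vc for pi, vc in zip(p, v_cur)]
    p = [alpha * pi for pi in p]
    # Mixed second derivative: -(d^2 L_inner / dw dtheta)^T p.
    mixed = torch.autograd.grad(g_inner, teacher_params, grad_outputs=p)
    return [-m for m in mixed]
```

Because only Hessian-vector products against the current student parameters are needed, the memory cost is independent of how many inner distillation steps were simulated, which is what makes the bilevel attack construction tractable.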
👥 Authors

Yukun Chen
Pieces Technologies Inc.
Natural Language Processing
Boheng Li
Nanyang Technological University
AI Security · Watermarking · Backdoor Attack · Copyright Protection
Yu Yuan
State Key Laboratory of Blockchain and Data Security, Zhejiang University
Leyi Qi
State Key Laboratory of Blockchain and Data Security, Zhejiang University
Yiming Li
Nanyang Technological University
Tianwei Zhang
Nanyang Technological University
Zhan Qin
Researcher, Zhejiang University
Data Security and Privacy · AI Security
Kui Ren
Professor and Dean of Computer Science, Zhejiang University, ACM/IEEE Fellow
Data Security & Privacy · AI Security · IoT & Vehicular Security