BAPFL: Exploring Backdoor Attacks Against Prototype-based Federated Learning

📅 2025-09-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prototype-based federated learning (PFL) mitigates data heterogeneity, but its robustness against backdoor attacks has not been systematically investigated. This paper first reveals PFL’s intrinsic resilience to conventional backdoor attacks and then proposes BAPFL, the first backdoor attack specifically designed for PFL. BAPFL integrates label-customized triggers with a global prototype alignment mechanism: it poisons global prototypes by manipulating their evolutionary trajectory, and employs gradient-driven trigger optimization to steer trigger-embedded samples toward the target-class prototype in feature space. Extensive experiments across multiple benchmark datasets and PFL variants demonstrate that BAPFL achieves a 35–75% improvement in attack success rate over baselines while preserving main-task accuracy, validating its effectiveness, stealthiness, and generalizability.

📝 Abstract
Prototype-based federated learning (PFL) has emerged as a promising paradigm to address data heterogeneity problems in federated learning, as it leverages mean feature vectors as prototypes to enhance model generalization. However, its robustness against backdoor attacks remains largely unexplored. In this paper, we identify that PFL is inherently resistant to existing backdoor attacks due to its unique prototype learning mechanism and local data heterogeneity. To further explore the security of PFL, we propose BAPFL, the first backdoor attack method specifically designed for PFL frameworks. BAPFL integrates a prototype poisoning strategy with a trigger optimization mechanism. The prototype poisoning strategy manipulates the trajectories of global prototypes to mislead the prototype training of benign clients, pushing their local prototypes of clean samples away from the prototypes of trigger-embedded samples. Meanwhile, the trigger optimization mechanism learns a unique and stealthy trigger for each potential target label, and guides the prototypes of trigger-embedded samples to align closely with the global prototype of the target label. Experimental results across multiple datasets and PFL variants demonstrate that BAPFL achieves a 35%-75% improvement in attack success rate compared to traditional backdoor attacks, while preserving main task accuracy. These results highlight the effectiveness, stealthiness, and adaptability of BAPFL in PFL.
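The abstract describes a gradient-driven trigger optimization mechanism that guides the features of trigger-embedded samples toward the global prototype of the target label. The sketch below illustrates that idea under stated assumptions; it is not the paper's implementation, and the `encoder` interface, additive-trigger form, step count, and ℓ∞ stealthiness bound `eps` are all assumptions introduced for illustration.

```python
import torch
import torch.nn.functional as F

def optimize_trigger(encoder, images, target_prototype,
                     steps=100, lr=0.01, eps=8 / 255):
    """Hypothetical sketch: learn a small additive trigger whose
    embedded features lie close to the target-class global prototype."""
    # One shared trigger pattern, broadcast over the batch.
    trigger = torch.zeros(1, *images.shape[1:], requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    for _ in range(steps):
        poisoned = (images + trigger).clamp(0, 1)
        feats = encoder(poisoned)  # feature vectors of poisoned samples
        # Pull poisoned features toward the target-class prototype.
        loss = F.mse_loss(feats, target_prototype.expand_as(feats))
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Keep the perturbation small so the trigger stays stealthy.
            trigger.clamp_(-eps, eps)
    return trigger.detach()
```

In an attack, the optimized trigger would be stamped onto a malicious client's samples; the ℓ∞ projection plays the role of the stealthiness constraint the abstract alludes to.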
Problem

Research questions and friction points this paper is trying to address.

Explores backdoor attack vulnerability in prototype-based federated learning
Proposes BAPFL method with prototype poisoning and trigger optimization
Tests effectiveness across datasets with improved attack success rates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Prototype poisoning strategy manipulates global prototypes
Trigger optimization mechanism learns stealthy unique triggers
Integrates prototype poisoning with trigger optimization
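The prototype poisoning strategy above manipulates the reported prototypes so that benign clients' clean-sample prototypes drift away from trigger-embedded ones. A minimal sketch of how a malicious client might shift the prototypes it submits to the server, assuming a per-label dictionary of prototype vectors (all names and the scaling rule are hypothetical, not the paper's method):

```python
import torch

def poison_prototypes(local_protos, trigger_protos, alpha=0.5):
    """Hypothetical sketch: push each reported class prototype away from
    the corresponding trigger-embedded prototype, nudging the global
    prototype's trajectory in the attacker's favour."""
    poisoned = {}
    for label, proto in local_protos.items():
        # Direction pointing away from the trigger-embedded prototype.
        direction = proto - trigger_protos[label]
        # Unit-norm step so the shift size is controlled by alpha alone.
        poisoned[label] = proto + alpha * direction / (direction.norm() + 1e-8)
    return poisoned
```

When the server averages client prototypes, such shifted submissions would steer the global prototypes' evolution, which is the trajectory-manipulation effect the Innovation bullets describe.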
Honghong Zeng
School of Computer Science, Shanghai Jiao Tong University
Jiong Lou
Research Assistant Professor, Shanghai Jiao Tong University
Edge computing, Blockchain
Zhe Wang
School of Computer Science, Shanghai Jiao Tong University
Hefeng Zhou
Shanghai Jiao Tong University
AIEA
Chentao Wu
Professor of Computer Science, Shanghai Jiao Tong University
Data Storage, Computer Systems, Computer Architecture, Cloud Computing, AI for Systems
Wei Zhao
Shenzhen University of Advanced Technology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences
Jie Li
School of Computer Science, Shanghai Jiao Tong University; Yancheng Blockchain Research Institute; Shanghai Jiao Tong University (Wuxi) Blockchain Advanced Research Center; Shanghai Key Laboratory of Trusted Data Circulation and Governance; and Web3