ADCA: Attention-Driven Multi-Party Collusion Attack in Federated Self-Supervised Learning

📅 2026-02-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses critical limitations of existing backdoor attacks in federated self-supervised learning (FSSL), which rely on a global, uniform trigger that is easily detectable, diluted during model aggregation, and ill-suited to heterogeneous client data. To overcome these challenges, the authors propose an attention-driven, collusive attack framework wherein a coalition of malicious clients decomposes the global trigger during local pretraining, searches for optimal local trigger patterns, and leverages an attention mechanism to dynamically aggregate malicious updates. This approach effectively mitigates the dilution effect caused by benign clients. Notably, it is the first to integrate attention mechanisms into collusive backdoor attacks, replacing global triggers with adaptive local ones to significantly enhance both stealthiness and robustness. Extensive experiments across multiple FSSL settings and four datasets demonstrate substantial improvements in attack success rate and persistence over state-of-the-art methods.

📝 Abstract
Federated Self-Supervised Learning (FSSL) integrates the privacy advantages of distributed training with the capability of self-supervised learning to leverage unlabeled data, showing strong potential across applications. However, recent studies have shown that FSSL is also vulnerable to backdoor attacks. Existing attacks are limited by their trigger design, which typically employs a global, uniform trigger that is easily detected, gets diluted during aggregation, and lacks robustness in heterogeneous client environments. To address these challenges, we propose the Attention-Driven multi-party Collusion Attack (ADCA). During local pre-training, malicious clients decompose the global trigger to find optimal local patterns. Subsequently, these malicious clients collude to form a malicious coalition and establish a collaborative optimization mechanism within it. In this mechanism, each member submits its model updates, and an attention mechanism dynamically aggregates them to explore the best cooperative strategy. The resulting aggregated parameters serve as the initial state for the next round of training within the coalition, thereby effectively mitigating the dilution of backdoor information by benign updates. Experiments on multiple FSSL scenarios and four datasets show that ADCA significantly outperforms existing methods in Attack Success Rate (ASR) and persistence, proving its effectiveness and robustness.
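The collaborative optimization step the abstract describes — coalition members submitting updates that are aggregated with attention weights before the next local round — can be sketched as below. This is an illustrative reconstruction under assumptions, not the authors' implementation: the scaled dot-product scoring against the previous aggregate, and the flat parameter-vector shapes, are choices made here for clarity.

```python
import numpy as np

def attention_aggregate(updates, key):
    """Attention-weighted aggregation of coalition members' model updates.

    updates: list of 1-D parameter-update vectors, one per malicious client.
    key:     reference vector used to score each update (here, the previous
             round's aggregate); this scoring choice is an assumption.
    """
    U = np.stack(updates)                  # (n_clients, n_params)
    scores = U @ key / np.sqrt(key.size)   # scaled dot-product attention scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # softmax over coalition members
    return weights @ U                     # convex combination of updates

# Toy usage: three coalition members, four model parameters.
rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(3)]
key = np.mean(updates, axis=0)            # previous aggregate as the query
agg = attention_aggregate(updates, key)   # serves as next round's init state
```

Because the softmax weights are non-negative and sum to one, the aggregate stays inside the convex hull of the coalition's updates, which is what lets the coalition reinforce shared backdoor directions rather than having them averaged away by benign clients.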
Problem

Research questions and friction points this paper is trying to address.

Federated Self-Supervised Learning
Backdoor Attack
Trigger Design
Client Heterogeneity
Model Aggregation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Attention Mechanism
Multi-Party Collusion
Federated Self-Supervised Learning
Backdoor Attack
Dynamic Aggregation
Jiayao Wang
School of Information and Artificial Intelligence, Yangzhou University, China
Yiping Zhang
School of Information and Artificial Intelligence, Yangzhou University, China
Jiale Zhang
Yangzhou University
AI security and privacy, Federated learning, Blockchain
Wenliang Yuan
College of Data Science, Jiaxing University, China
Qilin Wu
School of Computing and Artificial Intelligence, Chaohu University, China
Junwu Zhu
School of Information and Artificial Intelligence, Yangzhou University, China
Dongfang Zhao
Assistant Professor, University of Washington
Databases, AI, HPC, Cryptography, Arithmetic Geometry