HPE: Hallucinated Positive Entanglement for Backdoor Attacks in Federated Self-Supervised Learning

📅 2026-02-02
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing backdoor attacks in federated self-supervised learning suffer from low poisoning efficiency, poor transferability, and weak persistence. This work proposes a novel backdoor attack method, HPE, which introduces for the first time hallucinated positive sample augmentation and a feature entanglement mechanism. The former enhances the encoder's embedding of backdoor features through synthetically generated positive samples, while the latter tightly couples the trigger with backdoor samples in the representation space. Furthermore, HPE integrates selective parameter poisoning with proximity-aware model updates to align the poisoned local model closely with the global model, thereby improving attack stability. Extensive experiments demonstrate that HPE significantly outperforms existing methods across diverse federated self-supervised learning scenarios and datasets, maintaining strong robustness even against multiple state-of-the-art defense mechanisms.
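The feature entanglement mechanism described above pulls trigger-stamped samples toward target samples in the encoder's representation space. A minimal sketch of such an objective, assuming a cosine-similarity formulation and using a stand-in linear encoder (the paper's actual encoder and loss are not specified here, so `encode`, `apply_trigger`, and the loss form are illustrative assumptions):

```python
import numpy as np

def encode(x, W):
    """Stand-in encoder: linear map + L2 normalization.
    (Hypothetical; the paper uses a learned self-supervised encoder.)"""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def apply_trigger(x, trigger, mask):
    """Overlay a trigger pattern on the masked coordinates of each input."""
    return x * (1 - mask) + trigger * mask

def entanglement_loss(x_poison, x_target, trigger, mask, W):
    """Illustrative entanglement objective: mean(1 - cosine similarity)
    between embeddings of trigger-stamped samples and target samples,
    so minimizing it binds the trigger to the target representation."""
    z_trig = encode(apply_trigger(x_poison, trigger, mask), W)
    z_tgt = encode(x_target, W)
    cos = np.sum(z_trig * z_tgt, axis=-1)  # rows are unit-norm
    return float(np.mean(1.0 - cos))
```

With an all-zero mask and identical inputs the loss is zero, and it grows toward 2 as the trigger embedding drifts away from the target embedding, which is the behavior an entanglement term needs.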

πŸ“ Abstract
Federated self-supervised learning (FSSL) enables collaborative training of self-supervised representation models without sharing raw unlabeled data. While it serves as a crucial paradigm for privacy-preserving learning, its security remains vulnerable to backdoor attacks, where malicious clients manipulate local training to inject targeted backdoors. Existing FSSL attack methods, however, often suffer from low utilization of poisoned samples, limited transferability, and weak persistence. To address these limitations, we propose a new backdoor attack method for FSSL, namely Hallucinated Positive Entanglement (HPE). HPE first employs hallucination-based augmentation using synthetic positive samples to enhance the encoder's embedding of backdoor features. It then introduces feature entanglement to enforce tight binding between triggers and backdoor samples in the representation space. Finally, selective parameter poisoning and proximity-aware updates constrain the poisoned model within the vicinity of the global model, enhancing its stability and persistence. Experimental results on several FSSL scenarios and datasets show that HPE significantly outperforms existing backdoor attack methods in performance and exhibits strong robustness under various defense mechanisms.
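The abstract's third component, selective parameter poisoning with proximity-aware updates, constrains the poisoned local model to stay near the global model. A minimal sketch under stated assumptions: the proximal pull is modeled as a FedProx-style term `mu * (w_local - w_global)`, and "selective" is approximated by updating only the largest-magnitude gradient coordinates (both are illustrative stand-ins, not the paper's exact formulation):

```python
import numpy as np

def proximity_aware_update(w_local, grad, w_global, lr=0.1, mu=0.5, top_k=None):
    """One hypothetical poisoned-client step.

    Follows the attack gradient but adds a proximal pull toward the
    global model so the poisoned update stays in its vicinity.
    If top_k is set, only the top_k largest-magnitude gradient
    coordinates are updated (a simple stand-in for selective
    parameter poisoning)."""
    step = grad + mu * (w_local - w_global)
    if top_k is not None:
        keep = np.argsort(np.abs(grad))[-top_k:]
        masked = np.zeros_like(step)
        masked[keep] = step[keep]
        step = masked
    return w_local - lr * step
```

Even with a zero attack gradient, a positive `mu` shrinks the distance to the global model each step, which is what makes the poisoned update hard to distinguish from benign ones.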
Problem

Research questions and friction points this paper is trying to address.

backdoor attacks
federated self-supervised learning
poisoned samples
transferability
persistence
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hallucinated Positive Entanglement
Backdoor Attack
Federated Self-Supervised Learning
Feature Entanglement
Selective Parameter Poisoning
🔎 Similar Papers
Jiayao Wang, School of Information Engineering, Yangzhou University, China
Yang Song, Foster School of Business, University of Washington (Finance)
Zhendong Zhao, Institute of Information Engineering, Chinese Academy of Sciences, China
Jiale Zhang, Yangzhou University (AI security and privacy, Federated learning, Blockchain)
Qilin Wu, School of Computing and Artificial Intelligence, Chaohu University, China
Wenliang Yuan, College of Data Science, Jiaxing University, China
Junwu Zhu, School of Information Engineering, Yangzhou University, China
Dongfang Zhao, Assistant Professor, University of Washington (Databases, AI, HPC, Cryptography, Arithmetic Geometry)