Phantom Subgroup Poisoning: Stealth Attacks on Federated Recommender Systems

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing poisoning attacks against federated recommender systems (FedRec) lack stealthiness and subgroup specificity, limiting their practical threat. Method: This paper proposes Spattack, the first targeted poisoning attack aimed at specific user subgroups (e.g., elderly users), built on a two-stage approximation-and-promotion framework. It integrates contrastive learning to sharpen the separation between subgroup embeddings, adaptive weight optimization, and embedding alignment to maximize attack effectiveness on target users while minimizing collateral impact on non-target users. Contribution/Results: Extensive experiments on three real-world datasets demonstrate that Spattack effectively manipulates the target subgroup's recommendations with only 0.1% malicious clients. Crucially, it remains robust against seven representative defense mechanisms, underscoring its practical viability and severity as a novel threat to FedRec integrity.

📝 Abstract
Federated recommender systems (FedRec) have emerged as a promising solution for delivering personalized recommendations while safeguarding user privacy. However, recent studies have demonstrated their vulnerability to poisoning attacks. Existing attacks typically target the entire user group, which compromises stealth and increases the risk of detection. In contrast, real-world adversaries may prefer to promote target items to specific user subgroups, such as recommending health supplements to elderly users. Motivated by this gap, we introduce Spattack, the first targeted poisoning attack designed to manipulate recommendations for specific user subgroups in the federated setting. Specifically, Spattack adopts a two-stage approximation-and-promotion strategy, which first simulates user embeddings of target/non-target subgroups and then promotes target items to the target subgroups. To enhance the approximation stage, we push the inter-group embeddings away based on contrastive learning and augment the target group's relevant item set based on clustering. To enhance the promotion stage, we further propose to adaptively tune the optimization weights between target and non-target subgroups. In addition, an embedding alignment strategy is proposed to align the embeddings between the target items and the relevant items. We conduct comprehensive experiments on three real-world datasets, comparing Spattack against seven state-of-the-art poisoning attacks and seven representative defense mechanisms. Experimental results demonstrate that Spattack consistently achieves strong manipulation performance on the specific user subgroup, while incurring minimal impact on non-target users, even when only 0.1% of users are malicious. Moreover, Spattack maintains competitive overall recommendation performance and exhibits strong resilience against existing mainstream defenses.
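As a rough illustration of the promotion stage described in the abstract, the sketch below performs gradient ascent on a target item's embedding so that its scores rise for an approximated target subgroup while a penalty suppresses score drift for non-target users. This is not the paper's actual algorithm: the random embeddings, the penalty weight `lam`, and the update rule are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (illustrative)

# Hypothetical attacker-side approximations of user embeddings
# (the paper's approximation stage is simulated here with random vectors).
target_users = rng.normal(size=(5, d))
nontarget_users = rng.normal(size=(20, d))
target_item = 0.1 * rng.normal(size=d)

init_t_scores = target_users @ target_item        # target scores before attack
init_nt_scores = nontarget_users @ target_item    # non-target scores before attack

lr, lam = 0.1, 1.0  # step size and non-target penalty weight (assumed values)
for _ in range(200):
    # Ascent direction: raise the mean dot-product score for target users.
    grad_promote = target_users.mean(axis=0)
    # Penalty: pull non-target scores back toward their pre-attack values.
    drift = (nontarget_users @ target_item) - init_nt_scores
    grad_penalty = (drift[:, None] * nontarget_users).mean(axis=0)
    target_item += lr * (grad_promote - lam * grad_penalty)

print("mean target score gain:",
      float((target_users @ target_item).mean() - init_t_scores.mean()))
print("mean |non-target drift|:",
      float(np.abs((nontarget_users @ target_item) - init_nt_scores).mean()))
```

The penalty term is what gives the sketch its "stealth" flavor: the item embedding moves mostly along directions that boost target-user scores without disturbing non-target scores.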
Problem

Research questions and friction points this paper is trying to address.

Targeted poisoning attacks on federated recommender systems
Manipulate recommendations for specific user subgroups
Enhance stealth and reduce detection risk
Innovation

Methods, ideas, or system contributions that make the work stand out.

Two-stage approximation-and-promotion strategy
Contrastive learning to separate inter-group embeddings
Adaptive tuning of optimization weights between subgroups
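The contrastive separation and adaptive weighting ideas listed above can be sketched as follows. The cohesion/separation objective, the loss-magnitude weighting rule, and the random embeddings are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
g_t = rng.normal(size=(6, d))  # approximated target-subgroup embeddings (assumed)
g_n = rng.normal(size=(6, d))  # approximated non-target embeddings (assumed)

def push_apart_step(g_t, g_n, lr=0.05):
    """One step of a simple contrastive-style update: pull each group's
    points toward their own centroid (cohesion) and push the two group
    centroids away from each other (separation)."""
    c_t, c_n = g_t.mean(axis=0), g_n.mean(axis=0)
    sep = c_t - c_n
    sep /= np.linalg.norm(sep) + 1e-8  # unit inter-centroid direction
    g_t = g_t + lr * (sep + (c_t - g_t))
    g_n = g_n + lr * (-sep + (c_n - g_n))
    return g_t, g_n

def centroid_gap(g_t, g_n):
    return float(np.linalg.norm(g_t.mean(axis=0) - g_n.mean(axis=0)))

gap0 = centroid_gap(g_t, g_n)
for _ in range(50):
    g_t, g_n = push_apart_step(g_t, g_n)
print("centroid gap:", gap0, "->", centroid_gap(g_t, g_n))

# Adaptive weighting sketch: balance the target-promotion loss against
# the non-target-preservation loss by their current magnitudes
# (an assumed rule, not the paper's exact scheme).
loss_t, loss_n = 2.0, 0.5
w_t = loss_t / (loss_t + loss_n)
total = w_t * loss_t + (1 - w_t) * loss_n
```

Separating the subgroup embeddings before the promotion stage is what lets a crafted update affect mainly the target subgroup; the adaptive weight shifts optimization effort toward whichever objective currently dominates.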
Bo Yan
Beijing University of Posts and Telecommunications, Beijing, China
Yurong Hao
Beijing Jiaotong University
AI Security, Privacy-Preserving Computation
Dingqi Liu
Beijing University of Posts and Telecommunications, Beijing, China
Huabin Sun
Beijing University of Posts and Telecommunications, Beijing, China
Pengpeng Qiao
Institute of Science Tokyo (formerly Tokyo Tech)
Wei Yang Bryan Lim
Assistant Professor, Nanyang Technological University (NTU), Singapore
Edge Intelligence, Federated Learning, Applied AI, Sustainable AI
Yang Cao
Institute of Science Tokyo, Tokyo, Japan
Chuan Shi
Beijing University of Posts and Telecommunications
data mining, machine learning, social network analysis