🤖 AI Summary
Existing federated recommendation systems lack Byzantine robustness because their sparse aggregation mechanism cannot guarantee reliable model updates under adversarial participation. Method: this paper makes the first effort to study Byzantine robustness in federated recommendation from the perspective of sparse aggregation, treating the aggregation for a single item as the smallest execution unit; on this basis, the authors design the Spattack family of attacks, tailored to adversaries with limited knowledge and capability and adaptable to varying attack intensities. Results: across standard recommendation benchmarks (e.g., MovieLens, Amazon-Books), only a small number of malicious clients suffices to stall model convergence and even break state-of-the-art robust aggregation defenses (e.g., Krum, Bulyan). This work uncovers a critical security vulnerability of sparse aggregation in federated recommendation and provides both foundational insights and a benchmark attack paradigm for future robust architecture design.
📝 Abstract
To preserve user privacy in recommender systems, federated recommendation (FR), built on federated learning (FL), keeps personal data on local clients while updating a model collaboratively. Unlike general FL, FR relies on a unique sparse aggregation mechanism: each item's embedding is updated by only a subset of clients, rather than by all clients as in the dense aggregation of general FL. Recently, model security, an essential principle of FL, has received increasing attention, especially regarding Byzantine attacks, in which malicious clients can send arbitrary updates. Exploring the Byzantine robustness of FR is particularly critical because, in domains where FR is applied, e.g., e-commerce, malicious clients can be injected easily by registering new accounts. However, existing Byzantine works neglect the unique sparse aggregation of FR, making them unsuitable for our problem. We therefore make the first effort to investigate Byzantine attacks on FR from the perspective of sparse aggregation, which is non-trivial: it is unclear how to define Byzantine robustness under sparse aggregation or how to design Byzantine attacks under limited knowledge and capability. In this paper, we reformulate Byzantine robustness under sparse aggregation by defining the aggregation for a single item as the smallest execution unit. We then propose a family of effective attack strategies, named Spattack, which exploit the vulnerability of sparse aggregation and are categorized by the adversary's knowledge and capability. Extensive experimental results demonstrate that Spattack can effectively prevent convergence and even break down defenses with only a few malicious clients, raising alarms for securing FR systems.
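To make the sparse-aggregation vulnerability concrete, the following is a minimal illustrative sketch (not the authors' code, and assuming simple mean aggregation of per-item embedding updates). Each client submits updates only for items it interacted with, so the aggregation for a single item is the smallest execution unit; an item touched by no benign client is aggregated from the attacker's update alone.

```python
# Illustrative sketch of sparse per-item aggregation in federated
# recommendation. All names here are hypothetical, chosen for exposition.
import numpy as np

def sparse_aggregate(updates):
    """Mean-aggregate per-item embedding updates.

    updates: list of dicts mapping item_id -> update vector; each client
    only submits entries for items it interacted with (sparse aggregation).
    """
    per_item = {}
    for client_updates in updates:
        for item, vec in client_updates.items():
            per_item.setdefault(item, []).append(vec)
    # Each item is aggregated independently over whichever clients hold it:
    # the per-item aggregation is the smallest execution unit.
    return {item: np.mean(vecs, axis=0) for item, vecs in per_item.items()}

# Two benign clients interacted with item 0; no benign client holds item 1.
benign = [{0: np.array([1.0, 1.0])},
          {0: np.array([1.0, -1.0])}]
# A Byzantine client sends arbitrary updates for both items.
malicious = {0: np.array([-10.0, 0.0]), 1: np.array([-10.0, 0.0])}

agg = sparse_aggregate(benign + [malicious])
# Item 1's aggregate equals the attacker's update exactly, since no benign
# client contributes to it -- the weakness Spattack-style attacks exploit.
```

A dense-aggregation defense that compares updates across all clients has no leverage here: for item 1 there is only one update to compare, so a single malicious client fully controls that embedding.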