Not All Edges are Equally Robust: Evaluating the Robustness of Ranking-Based Federated Learning

📅 2025-03-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work identifies non-uniform robustness across edges in Federated Ranking Learning (FRL), revealing precisely localizable vulnerabilities that stem from its majority-voting, discrete ranking-aggregation mechanism. The authors give the first theoretical proof that layer-wise vulnerable edges exist in FRL and derive analytically tractable lower and upper bounds for identifying them in each layer. Leveraging this insight, they propose the Vulnerable Edge Manipulation (VEM) attack, a novel local model poisoning attack that applies optimization-driven, edge-level perturbations together with a model of the ranking aggregation to achieve high-precision adversarial control. On standard benchmark datasets, VEM achieves an overall attack impact of 53.23% and is 3.7× more impactful than state-of-the-art methods. These findings expose a fundamental security flaw in FRL's ranking-aggregation layer, and the work delivers both theoretical guarantees and empirical evidence to inform the design of robust federated learning frameworks.

📝 Abstract
Federated Ranking Learning (FRL) is a state-of-the-art FL framework that stands out for its communication efficiency and resilience to poisoning attacks. It diverges from the traditional FL framework in two ways: 1) it leverages discrete rankings instead of gradient updates, significantly reducing communication costs and limiting the potential space for malicious updates, and 2) it uses majority voting on the server side to establish the global ranking, ensuring that individual updates have minimal influence since each client contributes only a single vote. These features enhance the system's scalability and position FRL as a promising paradigm for FL training. However, our analysis reveals that FRL is not inherently robust, as certain edges are particularly vulnerable to poisoning attacks. Through a theoretical investigation, we prove the existence of these vulnerable edges and establish a lower bound and an upper bound for identifying them in each layer. Based on this finding, we introduce a novel local model poisoning attack against FRL, namely the Vulnerable Edge Manipulation (VEM) attack. The VEM attack focuses on identifying and perturbing the most vulnerable edges in each layer and leveraging an optimization-based approach to maximize the attack's impact. Through extensive experiments on benchmark datasets, we demonstrate that our attack achieves an overall 53.23% attack impact and is 3.7x more impactful than existing methods. Our findings highlight significant vulnerabilities in ranking-based FL systems and underline the urgency for the development of new robust FL frameworks.
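To make the abstract's mechanism concrete, here is a toy sketch of majority voting over discrete edge rankings and of why some edges are fragile. All sizes, variable names, and the margin heuristic are hypothetical illustrations, not the paper's implementation: each client votes for the `k` edges it ranks highest, the server keeps the `k` most-voted edges, and an edge whose tally sits within a small coalition's reach of the selection boundary can have its outcome flipped.

```python
import numpy as np

rng = np.random.default_rng(0)

n_clients, n_edges, k = 10, 8, 4  # toy sizes (hypothetical)

# Each client "votes" for the k edges it ranks highest (1 = in its top-k).
votes = np.zeros((n_clients, n_edges), dtype=int)
for c in range(n_clients):
    top = rng.choice(n_edges, size=k, replace=False)
    votes[c, top] = 1

tally = votes.sum(axis=0)             # per-edge vote count
global_topk = np.argsort(-tally)[:k]  # server keeps the k most-voted edges

# An edge is "vulnerable" when its tally sits close to the selection
# boundary: flipping a few votes (one per malicious client) changes its fate.
threshold = np.sort(tally)[::-1][k - 1]  # tally of the k-th selected edge
margin = np.abs(tally - threshold)
n_malicious = 2
vulnerable = np.where(margin <= n_malicious)[0]
```

Because each client contributes a single vote per edge, an honest majority protects edges with a large margin; the paper's point is that edges with a small margin exist in every layer and can be identified in advance.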
Problem

Research questions and friction points this paper is trying to address.

Is FRL's robustness to poisoning, attributed to discrete rankings and majority voting, actually uniform across edges?
Can vulnerable edges, those whose majority-vote outcome a small coalition of malicious clients can flip, be characterized theoretically and located in each layer?
How much impact can a poisoning attack achieve by concentrating its budget on these vulnerable edges rather than perturbing all edges uniformly?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Proves the existence of vulnerable edges in FRL and derives per-layer lower and upper bounds for identifying them
Introduces the Vulnerable Edge Manipulation (VEM) attack, which perturbs the most vulnerable edges via an optimization-based approach
Achieves an overall 53.23% attack impact, 3.7x more impactful than existing poisoning attacks
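The attack idea above can be sketched in miniature. This is a deliberately simplified illustration, not the paper's optimization: given the current vote tally, the malicious coalition spends its votes on excluded edges that sit within `n_malicious` votes of the selection boundary, since only those can be flipped into the global top-k. The function name and greedy selection rule are hypothetical.

```python
import numpy as np

def attack_votes(tally, k, n_malicious, n_edges):
    """Toy vulnerable-edge manipulation: target excluded edges that lie
    within n_malicious votes of the selection boundary (a greedy
    simplification of VEM's optimization-based approach)."""
    threshold = np.sort(tally)[::-1][k - 1]  # tally of the k-th selected edge
    # Edges currently excluded but flippable by the malicious coalition.
    flippable = [e for e in range(n_edges)
                 if tally[e] < threshold and threshold - tally[e] <= n_malicious]
    # Each malicious client has k votes to spend; target the cheapest flips.
    mal_vote = np.zeros(n_edges, dtype=int)
    for e in sorted(flippable, key=lambda e: threshold - tally[e])[:k]:
        mal_vote[e] = 1
    return mal_vote
```

For example, with `tally = [5, 5, 4, 3, 1, 0]`, `k = 2`, and two malicious clients, only the edges with tallies 4 and 3 are within reach, so the coalition's votes go there rather than being spread uniformly.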