Network Structures as an Attack Surface: Topology-Based Privacy Leakage in Federated Learning

📅 2025-06-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper identifies network topology as a novel attack surface in federated learning (FL): even under strong differential privacy guarantees, adversaries can use knowledge of the network structure to infer clients' sensitive data distributions. The authors propose three gradient-agnostic attack vectors (communication pattern analysis, parameter magnitude profiling, and structural position correlation) and evaluate them empirically across six adversarial knowledge settings, totaling 4,720 attack instances. To counter these topology-driven threats, they design a structural noise injection mechanism that complements existing privacy-preserving techniques; experiments across 808 configurations show up to a 51.4% reduction in attack success rate. The work closes a fundamental gap in modeling and mitigating topology-level privacy threats and establishes the role of topology-aware defenses in FL security.
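
As a concrete (though hypothetical) illustration of the first attack vector, the sketch below shows what communication pattern analysis could look like: an adversary who observes only per-link message counts builds a communication fingerprint for each client and clusters clients by fingerprint similarity, treating the clusters as guesses about which clients hold similar data. The traffic model, the hierarchical clustering choice, and all variable names are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of a communication-pattern-analysis attack: an adversary
# who only sees per-link message counts groups clients by their communication
# "fingerprint" and uses the groups as a guess for shared data distributions.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

n_clients = 8
# Observed message counts between clients over one training run
# (symmetric matrix; in a real deployment this would come from traffic metadata).
msg_counts = rng.poisson(lam=5, size=(n_clients, n_clients))
msg_counts = np.triu(msg_counts, 1)
msg_counts = msg_counts + msg_counts.T

# Each client's fingerprint is its row of normalized communication volume.
fingerprints = msg_counts / msg_counts.sum(axis=1, keepdims=True)

# Cluster clients with similar fingerprints; the adversary hypothesizes that
# clients in the same cluster hold similar data distributions.
dists = pdist(fingerprints, metric="cosine")
labels = fcluster(linkage(dists, method="average"), t=2, criterion="maxclust")
print("Inferred client groups:", labels)
```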

📝 Abstract
Federated learning systems increasingly rely on diverse network topologies to address scalability and organizational constraints. While existing privacy research focuses on gradient-based attacks, the privacy implications of network topology knowledge remain critically understudied. We conduct the first comprehensive analysis of topology-based privacy leakage across realistic adversarial knowledge scenarios, demonstrating that adversaries with varying degrees of structural knowledge can infer sensitive data distribution patterns even under strong differential privacy guarantees. Through systematic evaluation of 4,720 attack instances, we analyze six distinct adversarial knowledge scenarios: complete topology knowledge and five partial knowledge configurations reflecting real-world deployment constraints. We propose three complementary attack vectors: communication pattern analysis, parameter magnitude profiling, and structural position correlation, achieving success rates of 84.1%, 65.0%, and 47.2%, respectively, under complete knowledge conditions. Critically, we find that 80% of realistic partial knowledge scenarios maintain attack effectiveness above security thresholds, with certain partial knowledge configurations achieving performance superior to the baseline complete knowledge scenario. To address these vulnerabilities, we propose and empirically validate structural noise injection as a complementary defense mechanism across 808 configurations, demonstrating up to a 51.4% additional reduction in attack success rate when properly layered with existing privacy techniques. These results establish that network topology represents a fundamental privacy vulnerability in federated learning systems while providing practical pathways for mitigation through topology-aware defense mechanisms.
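
To make parameter magnitude profiling more tangible, here is a minimal toy sketch, assuming the adversary can record only the L2 norm of each client's (noised) update per round and ranks clients by their average norm as a proxy for data-distribution skew. The skew model, the Laplace stand-in for DP noise, and the parameter values are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of parameter magnitude profiling: the adversary records
# the L2 norm of each client's (possibly DP-noised) update per round and uses
# the per-client norm profile to infer relative data-distribution skew.
import numpy as np

rng = np.random.default_rng(1)

n_clients, n_rounds, dim = 6, 20, 100
# Ground-truth "skew" of each client's local label distribution (unknown to
# the adversary); more skew -> larger average update magnitude in this toy model.
true_skew = rng.uniform(0.0, 1.0, size=n_clients)

norm_profiles = np.empty((n_clients, n_rounds))
for r in range(n_rounds):
    for c in range(n_clients):
        update = rng.normal(scale=0.5 + true_skew[c], size=dim)
        update += rng.laplace(scale=0.2, size=dim)  # stand-in for DP noise
        norm_profiles[c, r] = np.linalg.norm(update)

# The adversary ranks clients by mean observed norm as a proxy for skew.
inferred_rank = np.argsort(norm_profiles.mean(axis=1))[::-1]
true_rank = np.argsort(true_skew)[::-1]
print("Inferred skew ranking:", inferred_rank)
print("True skew ranking:    ", true_rank)
```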
Problem

Research questions and friction points this paper addresses.

Analyzing topology-based privacy leaks in federated learning networks
Evaluating attack effectiveness under varying adversarial knowledge scenarios
Proposing defenses against network-structure-driven privacy vulnerabilities
Innovation

Methods, ideas, or system contributions that make the work stand out.

First comprehensive analysis of topology-based privacy leakage in federated learning
Three gradient-agnostic attack vectors: communication pattern analysis, parameter magnitude profiling, and structural position correlation
A structural noise injection defense mechanism that layers with existing privacy techniques (see the sketch after this list)
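
Below is a rough sketch of how a structural noise injection defense might perturb the topology-visible signal, assuming a simple scheme that adds Laplace noise to observed link volumes and injects dummy traffic on a random subset of absent links. The function name inject_structural_noise, the noise scale, and the dummy-edge probability are illustrative assumptions, not the paper's calibrated mechanism.

```python
# Hypothetical sketch of a structural-noise-injection defense: before traffic
# metadata becomes observable, each link's message count is perturbed and a few
# dummy links are added, blurring the communication fingerprints an adversary
# would otherwise exploit. Parameters here are illustrative, not the paper's.
import numpy as np

rng = np.random.default_rng(2)

def inject_structural_noise(msg_counts: np.ndarray, scale: float = 3.0,
                            dummy_edge_prob: float = 0.15) -> np.ndarray:
    n = msg_counts.shape[0]
    noisy = msg_counts.astype(float).copy()

    # Perturb existing link volumes with symmetric Laplace noise.
    noise = rng.laplace(scale=scale, size=(n, n))
    noise = np.triu(noise, 1)
    noisy = np.clip(noisy + noise + noise.T, 0, None)

    # Add dummy traffic on a random subset of currently absent links.
    dummy_mask = (rng.random((n, n)) < dummy_edge_prob) & (msg_counts == 0)
    dummy_mask = np.triu(dummy_mask, 1)
    dummy_mask = dummy_mask | dummy_mask.T
    dummy_counts = rng.poisson(lam=scale, size=(n, n))
    dummy_counts = np.triu(dummy_counts, 1)
    dummy_counts = dummy_counts + dummy_counts.T
    noisy[dummy_mask] += dummy_counts[dummy_mask]

    np.fill_diagonal(noisy, 0)
    return noisy

# Usage on a toy observed traffic matrix.
observed = rng.poisson(lam=5, size=(6, 6))
observed = np.triu(observed, 1)
observed = observed + observed.T
print(inject_structural_noise(observed))
```

In this toy version, larger values of scale and dummy_edge_prob blur the communication fingerprints more aggressively at the cost of extra traffic, which mirrors the privacy-overhead trade-off a real deployment would have to tune.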