From Models to Network Topologies: A Topology Inference Attack in Decentralized Federated Learning

📅 2025-01-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses a previously overlooked privacy risk in decentralized federated learning (DFL): the inference of communication topology from publicly shared local models—without access to raw data, communication logs, or prior topological knowledge. Method: We propose the first model-behavior-based topology inference attack framework for DFL, introducing a systematic taxonomy of such attacks grounded in adversary capabilities and prior knowledge. Our approach integrates model update similarity analysis, graph-structural modeling, and quantitative evaluation into a unified attack strategy. Contribution/Results: Extensive experiments across standard DFL topologies demonstrate edge identification accuracy exceeding 90%, confirming that network structure can be reconstructed with high fidelity solely from publicly available model updates. This work exposes a critical yet neglected topological leakage vulnerability in DFL and establishes a foundational benchmark and cautionary insight for designing future topology-preserving defenses.
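The core pipeline described above — comparing publicly shared model updates pairwise and thresholding their similarity to predict edges — can be sketched roughly as follows. This is a minimal illustrative reconstruction, not the authors' implementation: the function name `infer_topology`, the cosine-similarity metric, and the fixed threshold are all assumptions made for the example.

```python
import math

def cosine(u, v):
    # Cosine similarity between two flattened model-update vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def infer_topology(updates, threshold=0.5):
    """Predict an adjacency matrix from per-node model updates.

    updates: list of flattened update vectors, one per DFL node.
    Nodes whose updates are unusually similar are assumed to be
    neighbours that aggregate each other's models (an assumption of
    this sketch; the paper's actual attack strategies are richer).
    """
    n = len(updates)
    adj = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if cosine(updates[i], updates[j]) > threshold:
                adj[i][j] = adj[j][i] = 1
    return adj
```

Against such a sketch, edge identification accuracy would then be measured by comparing the predicted adjacency matrix to the ground-truth overlay topology.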

📝 Abstract
Federated Learning (FL) is widely recognized as a privacy-preserving machine learning paradigm due to its model-sharing mechanism that avoids direct data exchange. However, model training inevitably leaves exploitable traces that can be used to infer sensitive information. In Decentralized FL (DFL), the overlay topology significantly influences its models' convergence, robustness, and security. This study explores the feasibility of inferring the overlay topology of DFL systems based solely on model behavior, introducing a novel Topology Inference Attack. A taxonomy of topology inference attacks is proposed, categorizing them by the attacker's capabilities and knowledge. Practical attack strategies are developed for different scenarios, and quantitative experiments are conducted to identify key factors influencing the attack effectiveness. Experimental results demonstrate that analyzing only the public models of individual nodes can accurately infer the DFL topology, underscoring the risk of sensitive information leakage in DFL systems. This finding offers valuable insights for improving privacy preservation in decentralized learning environments.
Problem

Research questions and friction points this paper is trying to address.

Decentralized Federated Learning
Topology Inference Attacks
Privacy Protection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized Federated Learning
Topology Inference Attack
Privacy Protection
Chao Feng
University of Zurich
Network, Machine Learning, Cybersecurity
Yuanzhe Gao
Communication Systems Group, Department of Informatics, University of Zürich, Binzmühlestrasse 14, CH-8050 Zürich, Switzerland
Alberto Huertas Celdran
University of Murcia
Cybersecurity, Brain-Computer Interfaces, Federated Learning, Trusted AI
Gerome Bovet
Cyber-Defence Campus, armasuisse Science & Technology, CH-3602 Thun, Switzerland
Burkhard Stiller
Communication Systems Group, Department of Informatics, University of Zürich, Binzmühlestrasse 14, CH-8050 Zürich, Switzerland