🤖 AI Summary
This paper addresses a previously overlooked privacy risk in decentralized federated learning (DFL): inferring the communication topology from publicly shared local models, without access to raw data, communication logs, or prior topological knowledge. Method: We propose the first model-behavior-based topology inference attack framework for DFL, introducing a systematic taxonomy of such attacks grounded in adversary capabilities and prior knowledge. Our approach integrates model update similarity analysis, graph-structural modeling, and quantitative evaluation into a unified attack strategy. Contribution/Results: Extensive experiments across standard DFL topologies demonstrate edge identification accuracy exceeding 90%, confirming that the network structure can be reconstructed with high fidelity solely from publicly available model updates. This work exposes a critical yet neglected topological leakage vulnerability in DFL and establishes a benchmark, along with cautionary insights, for designing future topology-preserving defenses.
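The core idea, that nodes exchanging models end up with unusually similar parameters, can be sketched as follows. This is a minimal illustration under assumed details (cosine similarity of flattened model updates, a fixed decision threshold), not the paper's actual algorithm; the function name `infer_topology` and the threshold value are hypothetical:

```python
import numpy as np

def infer_topology(models: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Guess a DFL adjacency matrix from per-node model updates.

    models: (n_nodes, n_params) array, one flattened update per node.
    Nodes whose updates have cosine similarity above `threshold`
    are predicted to be neighbors in the overlay topology.
    """
    # Pairwise cosine similarity between flattened update vectors.
    norms = np.linalg.norm(models, axis=1, keepdims=True)
    sim = (models @ models.T) / (norms @ norms.T)
    # Threshold similarities into a binary adjacency matrix; no self-loops.
    adj = (sim > threshold).astype(int)
    np.fill_diagonal(adj, 0)
    return adj

# Toy example: nodes 0-1 and nodes 2-3 form two aggregation pairs,
# so their updates point in nearly the same direction.
updates = np.array([
    [1.0, 0.0,  0.1],
    [1.0, 0.0, -0.1],
    [0.0, 1.0,  0.1],
    [0.0, 1.0, -0.1],
])
print(infer_topology(updates))  # recovers edges (0,1) and (2,3)
```

In a real attack the inputs would be the models each node publishes per round, and the threshold (or a more robust clustering/graph-learning step) would have to be chosen without ground truth.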
📝 Abstract
Federated Learning (FL) is widely recognized as a privacy-preserving machine learning paradigm because its model-sharing mechanism avoids direct data exchange. However, model training inevitably leaves exploitable traces that can be used to infer sensitive information. In Decentralized FL (DFL), the overlay topology significantly influences model convergence, robustness, and security. This study explores the feasibility of inferring the overlay topology of a DFL system solely from model behavior, introducing a novel Topology Inference Attack. A taxonomy of topology inference attacks is proposed, categorizing them by the attacker's capabilities and knowledge. Practical attack strategies are developed for different scenarios, and quantitative experiments identify the key factors influencing attack effectiveness. The results demonstrate that analyzing only the public models of individual nodes can accurately reveal the DFL topology, underscoring the risk of sensitive information leakage in DFL systems. These findings offer valuable insights for improving privacy preservation in decentralized learning environments.