UnlinkableDFL: a Practical Mixnet Protocol for Churn-Tolerant Decentralized FL Model Sharing

📅 2026-02-24
🤖 AI Summary
This work addresses the vulnerability of participant identity leakage through communication patterns in decentralized federated learning, where existing approaches lack strong anonymity guarantees. To tackle this, the paper proposes the first integration of mix networks with sharded model aggregation, achieving unlinkability of model updates in a fully decentralized setting. Model shards are transmitted via multi-hop encrypted routing and aggregated without requiring any identity information. The design supports dynamic node participation—allowing nodes to join or leave at will—and provides theoretically provable anonymity. Experimental evaluation demonstrates that the system maintains good convergence properties, resilience to node churn, and reasonable resource overhead, introducing only bounded communication latency and effectively balancing anonymity with efficiency.

📝 Abstract
Decentralized Federated Learning (DFL) eliminates the need for a central aggregator, but it can expose communication patterns that reveal participant identities. This work presents UnlinkableDFL, a DFL framework that combines a peer-based mixnet with fragment-based model aggregation to ensure unlinkability in fully decentralized settings. Model updates are divided into encrypted fragments, sent over separate multi-hop paths, and aggregated without using any identity information. A theoretical analysis indicates that relay and end-to-end unlinkability improve with larger mixing sets and longer paths, while convergence remains similar to standard FedAvg. A prototype implementation evaluates learning performance, latency, unlinkability, and resource usage. The results show that UnlinkableDFL converges reliably and adapts to node churn. Communication latency emerges as the main overhead, while memory and CPU usage stay moderate. These findings illustrate the balance between anonymity and system efficiency, demonstrating that strong unlinkability can be maintained in decentralized learning workflows.
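The abstract describes splitting each model update into encrypted fragments that travel over separate multi-hop paths and are aggregated without identity information. A minimal sketch of the fragmentation-and-aggregation idea is shown below, assuming an additive secret-sharing style split (the function names `split_update` and `aggregate` are illustrative, not from the paper, and encryption and routing are omitted):

```python
import random

def split_update(update, k):
    """Split a model update (list of floats) into k additive fragments.

    The first k-1 fragments are random masks; the last one corrects the
    sum, so the fragments add back to the update while no single
    fragment reveals it.
    """
    frags = [[random.gauss(0.0, 1.0) for _ in update] for _ in range(k - 1)]
    last = [u - sum(f[i] for f in frags) for i, u in enumerate(update)]
    return frags + [last]

def aggregate(fragment_pool, n_updates):
    """Average all received fragments; sender identity is never consulted."""
    dim = len(fragment_pool[0])
    total = [sum(f[i] for f in fragment_pool) for i in range(dim)]
    return [t / n_updates for t in total]

# Two nodes contribute updates; a mixnet would deliver their fragments
# in an unlinkable order, which the shuffle stands in for here.
u1, u2 = [1.0, 2.0], [3.0, 4.0]
pool = split_update(u1, 3) + split_update(u2, 3)
random.shuffle(pool)
avg = aggregate(pool, 2)  # recovers the FedAvg-style mean [2.0, 3.0]
```

Because the masks cancel when all fragments are summed, the aggregate equals the plain average even though fragments arrive unordered and unattributed, which is the property the paper's fragment-based aggregation relies on.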
Problem

Research questions and friction points this paper is trying to address.

Decentralized Federated Learning, Unlinkability, Privacy, Communication Patterns, Participant Anonymity
Innovation

Methods, ideas, or system contributions that make the work stand out.

UnlinkableDFL, mixnet, decentralized federated learning, fragment-based aggregation, unlinkability
Authors

Chao Feng
University of Zurich

Thomas Grubl
Communication Systems Group, Department of Informatics, University of Zurich, 8050 Zürich, Switzerland

Jan von der Assen
University of Zurich

Sandrin Raphael Hunkeler
Communication Systems Group, Department of Informatics, University of Zurich, 8050 Zürich, Switzerland

Linn Anna Spitz
Communication Systems Group, Department of Informatics, University of Zurich, 8050 Zürich, Switzerland

Gerome Bovet
Cyber-Defence Campus, armasuisse Science & Technology, 3602 Thun, Switzerland

Burkhard Stiller
Communication Systems Group, Department of Informatics, University of Zurich, 8050 Zürich, Switzerland