SketchGuard: Scaling Byzantine-Robust Decentralized Federated Learning via Sketch-Based Screening

📅 2025-10-09
📈 Citations: 0
Influential: 0
🤖 AI Summary
Decentralized federated learning (DFL) is vulnerable to Byzantine attacks, yet existing robust defenses—relying on full-model similarity-based filtering—suffer from prohibitive communication and computational overhead, limiting scalability. This paper proposes SketchGuard, the first Byzantine-resilient defense for DFL leveraging Count Sketch, a lightweight linear sketching technique. SketchGuard enables efficient neighbor trustworthiness evaluation via low-dimensional sketches, decoupling model filtering from aggregation: only models from verified trustworthy neighbors are downloaded in full. This design preserves convergence guarantees and privacy while drastically reducing resource consumption. Experiments across multiple datasets and network topologies demonstrate that SketchGuard reduces communication overhead by 50%–70% and cuts computation time by up to 82%, matching state-of-the-art robustness. Crucially, its advantages grow with increasing model size and network connectivity.
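As a back-of-the-envelope check on the communication claim, the complexity terms from the abstract ($O(d|N_i|)$ for full-model exchange vs. $O(k|N_i| + d|S_i|)$ for SketchGuard) can be plugged with illustrative numbers. Every concrete value below is a made-up assumption, not a figure from the paper:

```python
# Illustrative per-round communication cost for one client, using the
# complexity terms from the abstract: baseline O(d*|N_i|) versus
# SketchGuard's O(k*|N_i| + d*|S_i|). All concrete numbers are invented.
d = 10_000_000      # model dimension (e.g. a ~10M-parameter model)
k = 100_000         # sketch dimension, k << d
n_neighbors = 20    # |N_i|: neighbors screened each round
n_accepted = 6      # |S_i| <= |N_i|: neighbors passing the screen

baseline = d * n_neighbors                      # fetch full models from everyone
sketchguard = k * n_neighbors + d * n_accepted  # sketches + accepted full models
saving = 1 - sketchguard / baseline
print(f"reduction: {saving:.0%}")
```

With these assumed values the reduction lands at 69%, consistent in spirit with the 50%–70% range reported above; the actual saving depends on how many neighbors the filter accepts.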

📝 Abstract
Decentralized Federated Learning (DFL) enables privacy-preserving collaborative training without centralized servers, but remains vulnerable to Byzantine attacks where malicious clients submit corrupted model updates. Existing Byzantine-robust DFL defenses rely on similarity-based neighbor screening that requires every client to exchange and compare complete high-dimensional model vectors with all neighbors in each training round, creating prohibitive communication and computational costs that prevent deployment at web scale. We propose SketchGuard, a general framework that decouples Byzantine filtering from model aggregation through sketch-based neighbor screening. SketchGuard compresses $d$-dimensional models to $k$-dimensional sketches ($k \ll d$) using Count Sketch for similarity comparisons, then selectively fetches full models only from accepted neighbors, reducing per-round communication complexity from $O(d|N_i|)$ to $O(k|N_i| + d|S_i|)$, where $|N_i|$ is the neighbor count and $|S_i| \le |N_i|$ is the accepted neighbor count. We establish rigorous convergence guarantees in both strongly convex and non-convex settings, proving that Count Sketch compression preserves Byzantine resilience with controlled degradation bounds, where approximation errors introduce only a $(1+O(\varepsilon))$ factor in the effective threshold parameter. Comprehensive experiments across multiple datasets, network topologies, and attack scenarios demonstrate that SketchGuard maintains identical robustness to state-of-the-art methods while reducing computation time by up to 82% and communication overhead by 50–70% depending on filtering effectiveness, with benefits scaling multiplicatively with model dimensionality and network connectivity. These results establish the viability of sketch-based compression as a fundamental enabler of robust DFL at web scale.
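The compression primitive named in the abstract, Count Sketch, is a linear random projection: each coordinate is hashed to one of $k$ buckets and multiplied by a random sign. A minimal NumPy illustration follows; the dimensions and the shared-seed convention are assumptions for the demo, not parameters from the paper:

```python
import numpy as np

def count_sketch(x: np.ndarray, k: int, seed: int = 42) -> np.ndarray:
    """Compress a d-dimensional vector into a k-dimensional Count Sketch.

    Each coordinate j is hashed to a bucket h(j) and scaled by a random
    sign s(j); the map is linear, so inner products between sketches
    approximate inner products between the original vectors. All clients
    must share the same seed for their sketches to be comparable.
    """
    rng = np.random.default_rng(seed)                 # shared randomness
    h = rng.integers(0, k, size=x.shape[0])           # bucket hash [d] -> [k]
    s = rng.choice([-1.0, 1.0], size=x.shape[0])      # sign hash [d] -> {+1,-1}
    sketch = np.zeros(k)
    np.add.at(sketch, h, s * x)                       # signed bucket sums
    return sketch

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A benign neighbor (small perturbation of the local model) stays close in
# sketch space, while an unrelated model does not -- the basis of screening.
rng = np.random.default_rng(0)
d, k = 20_000, 4_096                                  # k << d
honest = rng.standard_normal(d)
near = honest + 0.1 * rng.standard_normal(d)          # benign neighbor
far = rng.standard_normal(d)                          # unrelated model

sk_h, sk_n, sk_f = (count_sketch(v, k) for v in (honest, near, far))
```

Because the sketch is linear, the sketch of a sum of models equals the sum of their sketches, which is what lets similarity screening commute with compression and underpins the convergence analysis above.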
Problem

Research questions and friction points this paper is trying to address.

Addresses Byzantine attack vulnerabilities in decentralized federated learning systems
Reduces prohibitive communication and computation costs in existing defenses
Enables scalable robust learning through sketch-based compression techniques
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses Count Sketch compression for model similarity comparison
Decouples Byzantine filtering from model aggregation process
Selectively fetches full models only from accepted neighbors
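The three points above can be combined into one decoupled round: screen on cheap sketches, then fetch and aggregate only accepted full models. The helper names, the inline sketch, and the fixed cosine cutoff below are hypothetical stand-ins for the paper's actual screening rule:

```python
import numpy as np

def screen_then_aggregate(own_model, neighbor_sketches, fetch_full_model,
                          sketch_fn, threshold=0.5):
    """One decoupled round: filter on k-dim sketches, aggregate full models.

    neighbor_sketches: dict neighbor_id -> k-dim sketch (cheap to exchange).
    fetch_full_model:  callable neighbor_id -> d-dim model; called only for
                       accepted neighbors, which is where bandwidth is saved.
    sketch_fn:         shared sketching function applied to own_model.
    threshold:         stand-in cosine cutoff for the paper's screening rule.
    """
    own_sketch = sketch_fn(own_model)

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Step 1: Byzantine filtering in sketch space (O(k) per neighbor).
    accepted = [nid for nid, sk in neighbor_sketches.items()
                if cos(own_sketch, sk) >= threshold]

    # Step 2: fetch full models only from accepted neighbors (O(d) each)
    # and aggregate by simple averaging together with the local model.
    models = [own_model] + [fetch_full_model(nid) for nid in accepted]
    return np.mean(models, axis=0), accepted

# Demo: one honest neighbor, one sign-flipping attacker.
rng = np.random.default_rng(1)
d, k = 5_000, 512
proj = np.random.default_rng(7)                       # shared sketch randomness
h = proj.integers(0, k, size=d)
s = proj.choice([-1.0, 1.0], size=d)

def sketch_fn(v):
    out = np.zeros(k)
    np.add.at(out, h, s * v)
    return out

own = rng.standard_normal(d)
full = {"good": own + 0.1 * rng.standard_normal(d),
        "bad": -5.0 * own}                            # sign-flip attack
sketches = {nid: sketch_fn(v) for nid, v in full.items()}

agg, accepted = screen_then_aggregate(own, sketches, full.__getitem__, sketch_fn)
```

In this toy round the attacker's sketch points away from the local model and is rejected without its full model ever being downloaded, while the honest neighbor is fetched and averaged in.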