CLIP: Client-Side Invariant Pruning for Mitigating Stragglers in Secure Federated Learning

📅 2025-10-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the straggler problem caused by heterogeneous devices in secure federated learning, this paper proposes CLIP, a client-side invariant neuron pruning method and the first to integrate pruning into secure aggregation frameworks. The approach jointly leverages client-side invariance identification and network-aware dynamic pruning, enabling coordinated optimization of computation and communication while preserving privacy. By adaptively pruning model parameters for each client according to its computational capability and bandwidth during distributed training, it alleviates both computational and communication bottlenecks. Experiments across multiple benchmark datasets demonstrate a 13%–34% speedup in training time, with model accuracy variations bounded within [−2.6%, +1.3%], a favorable efficiency–accuracy trade-off. The core contribution is a dynamic, heterogeneity-aware pruning mechanism designed specifically for secure aggregation.
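The summary's notion of client-side invariance can be illustrated with a small sketch: neurons whose activations barely vary across a client's local data carry little client-specific information and are natural pruning candidates. The function below is an illustrative assumption, not the paper's actual criterion; the variance-based test, the threshold value, and the NumPy formulation are all hypothetical.

```python
import numpy as np

def find_invariant_neurons(activations, var_threshold=1e-3):
    """Return indices of neurons whose activations are (nearly) invariant
    across a client's local samples.

    Hypothetical sketch of invariance identification -- the paper's actual
    criterion may differ.

    activations: array of shape (num_samples, num_neurons), one row per
    local sample, one column per neuron.
    """
    # Per-neuron variance over the client's local samples.
    variance = activations.var(axis=0)
    # Neurons with near-zero variance produce (almost) the same output for
    # every local input, so pruning them changes local behavior little.
    return np.where(variance < var_threshold)[0]
```

For example, a neuron that outputs a constant 0.7 on every local sample would be flagged, while neurons with ordinary activation spread would be kept.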

📝 Abstract
Secure federated learning (FL) preserves data privacy during distributed model training. However, deploying such frameworks across heterogeneous devices creates performance bottlenecks: straggler clients with limited computational or network capabilities slow training for all participating clients. This paper introduces the first straggler mitigation technique for secure aggregation with deep neural networks. We propose CLIP, a client-side invariant neuron pruning technique coupled with network-aware pruning, which addresses the compute and network bottlenecks caused by stragglers during training with minimal accuracy loss. Our technique accelerates secure FL training by 13% to 34% across multiple datasets (CIFAR10, Shakespeare, FEMNIST), with an accuracy impact ranging from a 1.3% improvement to a 2.6% reduction.
Problem

Research questions and friction points this paper is trying to address.

Mitigating stragglers in secure federated learning
Addressing compute and network bottlenecks
Accelerating training with minimal accuracy loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Client-side invariant pruning for straggler mitigation
Network-aware pruning addresses compute and network bottlenecks
Minimal accuracy loss while accelerating secure federated learning
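One way to realize the network-aware, heterogeneity-aware behavior described above is to scale each client's pruning ratio by its bottleneck resource, so slower or bandwidth-limited clients prune more and fast clients prune little. The linear schedule and every parameter name below are hypothetical, sketched only to show the shape of such a policy, not the paper's actual mechanism.

```python
def prune_ratio(compute_speed, bandwidth, max_speed, max_bandwidth,
                max_ratio=0.5):
    """Map a client's relative capability to a per-client pruning ratio.

    Hypothetical network-aware schedule: the client's bottleneck resource
    (the smaller of normalized compute speed and bandwidth) determines how
    aggressively it prunes. All names and the linear form are assumptions.
    """
    # Normalize both resources against the most capable client, then take
    # the bottleneck: a fast CPU on a slow link is still a straggler.
    capability = min(compute_speed / max_speed, bandwidth / max_bandwidth)
    capability = min(max(capability, 0.0), 1.0)  # clamp to [0, 1]
    # Least capable clients prune up to max_ratio; the fastest prune nothing.
    return max_ratio * (1.0 - capability)
```

Under this sketch, the fastest client gets a ratio of 0.0, a client at half the reference compute speed gets 0.25, and a fully bottlenecked client gets the cap of 0.5.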
Anthony DiMaggio
University of Toronto, Toronto, Canada
Raghav Sharma
University of Toronto, Toronto, Canada
Gururaj Saileshwar
University of Toronto
Hardware Security · Computer Architecture · Memory Systems