🤖 AI Summary
To address the straggler problem caused by heterogeneous devices in secure federated learning, this paper proposes a client-invariant neuron pruning method—the first to integrate pruning into secure aggregation frameworks. The approach combines client-side invariance identification with network-aware dynamic pruning, enabling coordinated optimization of computation and communication while preserving privacy. By adaptively pruning model parameters to each client's computational capability and bandwidth during distributed training, it alleviates both computational and communication bottlenecks. Experiments across multiple benchmark datasets demonstrate a 13%–34% speedup in training time, with model accuracy changes bounded within [−2.6%, +1.3%], a favorable efficiency–accuracy trade-off. The core contribution is a dynamic, heterogeneity-aware pruning mechanism designed specifically for secure aggregation.
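The heterogeneity-aware idea above—prune each client's model in proportion to how much it lags the fastest participant—can be sketched as follows. This is a hypothetical illustration, not the paper's actual CLIP implementation: the ratio formula, the `max_ratio` cap, and magnitude-based neuron selection are all assumptions for the sake of the example.

```python
def pruning_ratio(client_time, fastest_time, max_ratio=0.5):
    """Fraction of neurons to prune, scaled by the client's relative slowness.

    A client as fast as the fastest gets ratio 0; slower clients prune more,
    capped at max_ratio. (Illustrative heuristic, not the paper's exact rule.)
    """
    slowdown = 1.0 - fastest_time / client_time  # 0 for the fastest client
    return min(max_ratio, slowdown)


def prune_neurons(weights, ratio):
    """Zero out the lowest-magnitude entries of a per-neuron importance list.

    `weights` stands in for per-neuron importance scores; secure-aggregation
    masking of the surviving parameters is omitted here.
    """
    n_prune = int(len(weights) * ratio)
    if n_prune == 0:
        return list(weights)
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    pruned = set(order[:n_prune])
    return [0.0 if i in pruned else w for i, w in enumerate(weights)]
```

For example, a client taking 2 s per round when the fastest takes 1 s would prune half its neurons, dropping the smallest-magnitude ones before the secure-aggregation step.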
📝 Abstract
Secure federated learning (FL) preserves data privacy during distributed model training. However, deploying such frameworks across heterogeneous devices creates performance bottlenecks: straggler clients with limited computational or network capabilities slow training for all participants. This paper introduces the first straggler mitigation technique for secure aggregation with deep neural networks. We propose CLIP, a client-side invariant neuron pruning technique coupled with network-aware pruning, which addresses compute and network bottlenecks caused by stragglers during training with minimal accuracy loss. Our technique accelerates secure FL training by 13% to 34% across multiple datasets (CIFAR10, Shakespeare, FEMNIST), with an accuracy impact ranging from a 1.3% improvement to a 2.6% reduction.