FedSparQ: Adaptive Sparse Quantization with Error Feedback for Robust & Efficient Federated Learning

📅 2025-11-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address excessive communication overhead caused by frequent, high-dimensional model updates in federated learning, this paper proposes an adaptive sparse quantization compression framework. The method integrates dynamic gradient sparsification (using an adaptive threshold that eliminates hyperparameter tuning), half-precision quantization, and a residual error feedback mechanism, significantly reducing communication load while preserving model accuracy. The framework is inherently compatible with both IID and non-IID data distributions and diverse model architectures, offering lightweight design and strong scalability. Experiments demonstrate that, compared to FedAvg, the approach reduces communication volume by 90%, improves model accuracy by 6%, and enhances convergence robustness by 50%, consistently outperforming existing compression methods across multiple vision benchmarks. The core innovation lies in the first joint integration of adaptive sparsity thresholds and error feedback into the quantization pipeline, enabling co-optimization of communication efficiency and model performance.

📝 Abstract
Federated Learning (FL) enables collaborative model training across decentralized clients while preserving data privacy by keeping raw data local. However, FL suffers from significant communication overhead due to the frequent exchange of high-dimensional model updates over constrained networks. In this paper, we present FedSparQ, a lightweight compression framework that dynamically sparsifies each client's gradient through an adaptive threshold, applies half-precision quantization to the retained entries, and integrates residuals via error feedback to prevent loss of information. FedSparQ requires no manual tuning of sparsity rates or quantization schedules, adapts seamlessly to both homogeneous and heterogeneous data distributions, and is agnostic to model architecture. Through extensive empirical evaluation on vision benchmarks under independent and identically distributed (IID) and non-IID data, we show that FedSparQ substantially reduces communication overhead (cutting bytes sent by 90% compared to FedAvg) while preserving or improving model accuracy (by 6% over the uncompressed FedAvg baseline and state-of-the-art compression methods) and enhancing convergence robustness (by 50% compared to the other baselines). Our approach provides a practical, easy-to-deploy solution for bandwidth-constrained federated deployments and lays the groundwork for future extensions in adaptive precision and privacy-preserving protocols.
Problem

Research questions and friction points this paper is trying to address.

Reduces communication overhead in federated learning by compressing model updates
Adaptively sparsifies and quantizes gradients without manual tuning for efficiency
Maintains model accuracy and convergence robustness across data distributions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive sparse quantization with error feedback
Dynamic sparsification via adaptive threshold
Half-precision quantization with residual integration
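The three innovations above combine into a single client-side compression step. The sketch below illustrates the idea in plain Python; the specific threshold rule (mean plus one standard deviation of the gradient magnitudes) and the constant `c` are illustrative assumptions, not the paper's exact formula.

```python
import math
import struct

def to_fp16(x):
    """Round a float to IEEE half precision using struct's 'e' format."""
    return struct.unpack('e', struct.pack('e', x))[0]

def fedsparq_compress(grad, residual, c=1.0):
    """One FedSparQ-style client step (sketch, assumed threshold rule):
    fold in the carried-over residual (error feedback), keep entries whose
    magnitude exceeds an adaptive threshold, quantize them to half
    precision, and carry unsent mass forward as the new residual."""
    g = [gi + ri for gi, ri in zip(grad, residual)]   # error feedback
    mags = [abs(x) for x in g]
    mean = sum(mags) / len(mags)
    std = math.sqrt(sum((m - mean) ** 2 for m in mags) / len(mags))
    tau = mean + c * std                              # adaptive threshold
    indices, values, new_residual = [], [], []
    for i, x in enumerate(g):
        if abs(x) >= tau:
            indices.append(i)
            values.append(to_fp16(x))                 # half-precision value
            new_residual.append(0.0)                  # sent: nothing carried
        else:
            new_residual.append(x)                    # unsent: next round
    return indices, values, new_residual

def fedsparq_decompress(indices, values, size):
    """Server-side reconstruction of the sparse update as a dense vector."""
    out = [0.0] * size
    for i, v in zip(indices, values):
        out[i] = v
    return out
```

Because every unsent entry is carried into the next round's residual, no gradient mass is ever discarded, which is what lets aggressive sparsification coexist with stable convergence.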
Chaimaa Medjadji
University of Luxembourg, Luxembourg
Sadi Alawadi
Blekinge Institute of Technology, Sweden
Feras M. Awaysheh
Associate Professor of Edge Intelligence, Umeå University, Sweden
Cloud Computing/Big Data · Edge AI · Federated Learning · Industry 4.0/IIoT · Data Privacy
Guilain Leduc
University of Luxembourg, Luxembourg
Sylvain Kubler
University of Luxembourg, Luxembourg
Yves Le Traon
University of Luxembourg, Luxembourg